EU’s Digital Services Act

How the law changes the internet

Mohtashim Javed

On August 25, 2023, the European Union’s Digital Services Act (DSA), one of the most comprehensive pieces of internet regulation to date, came into full effect for Facebook, Instagram and a number of other large tech platforms and services. The unprecedented regulation forces social media companies to better police the content they deliver within the EU. Although the law was passed in the EU, it is highly likely to have far-reaching global effects as companies adjust their policies to comply.

Google, Facebook, TikTok and other Big Tech companies operating in the European Union now face one of the most far-reaching efforts to clean up what people encounter online. The first phase of the 27-nation bloc’s groundbreaking digital rulebook, the Digital Services Act (DSA), through which the EU hopes to rein in big tech platforms and ensure online safety, took effect on August 25. The legislation requires tech giants to put in place measures to moderate illegal content and to prevent, for example, the promotion of hate speech on their platforms. The overarching goal of the DSA is to foster safer online environments. Under the new rules, online platforms must implement ways to prevent and remove posts containing illegal goods, services or content, while giving users the means to report this type of content.
What is the DSA?
As defined by the European Commission, the DSA is “a set of common rules on intermediaries’ obligations and accountability across the single market,” ensuring a higher level of protection for all EU users. The DSA was first proposed on December 15, 2020, and entered into force on November 16, 2022. It lays out the rules that all online platforms and search engines operating in the EU must abide by. The DSA applies to any digital service serving the EU, holding it legally accountable for everything from fake news and manipulation of shoppers to Russian propaganda and criminal activity, including child abuse. The rules all platforms have to comply with include:
1. Reporting and removing illegal content
Users should be able to easily report any misuse of the service, and specialized ‘trusted flaggers’ should be employed to seek out illegal content for removal.
2. Explaining moderation policies
Platforms should provide up-to-date, easy-to-understand terms and conditions that lay out their rights to restrict user access. These should cover the policies, procedures, measures and tools used for content moderation, indicating whether moderation is human or algorithmic. Platforms should also inform users if their access to the service has been limited and why.
3. Ensuring seller legitimacy
If users can purchase products or services through the platform, it is the platform’s responsibility to ensure the sellers are legitimate. Platforms should also perform random checks on the legality of the products and services offered.
4. Transparency over ads
Platforms should make it clear when they are displaying an advert rather than regular content.
5. No ‘dark patterns’
The interfaces used on platforms should not be designed to mislead users into, for example, buying products or giving away personal information.
6. Transparency around recommendations
Platforms must disclose how they use people’s data and choices, such as search terms, to recommend content, and must give users the option to opt out of recommendations based on profiling.
7. Crisis management
Regulators will be able to analyze the activities of tech companies during crises and require appropriate measures to protect fundamental rights.
8. Child protection
Platforms must take measures to protect the privacy and security of children and cannot show them adverts based on their personal data.
9. Annual risk assessments
Platforms must carry out annual risk assessments covering the spread of illegal content, misinformation, cyber-violence against women, harm to children, and adverse effects on fundamental rights. They must also take measures to reduce these risks, such as implementing parental controls or age-verification systems.
10. User rights
Users should be able to make complaints to the platform and their national authority, as well as settle disputes outside of court.
In a nutshell, DSA rules stipulate that:

  • Users will get clear information on why certain information is recommended to them, and they will have the right to opt out of recommendations based on profiling.
  • Users will be able to report illegal content easily and platforms have to process such reports diligently.
  • Advertisements cannot be displayed based on sensitive data of the user, such as ethnic origin, political opinions or sexual orientation.
  • Platforms need to label all ads and inform users who is promoting them.
  • Platforms must provide an easily understandable, plain-language summary of their terms and conditions, in the languages of the EU Member States where they operate, and enforce them diligently and non-arbitrarily.
  • Platforms will have to redesign their systems to ensure a high level of privacy, security and safety for minors, and advertising based on profiling towards children is forbidden.
  • Platforms will have to redesign their services, including their interfaces, recommender systems, terms and conditions, to mitigate the risk of negative effects on mental health.
  • Platforms and search engines need to address risks linked to the dissemination of illegal content and to negative effects on freedom of expression and information.
  • Platforms need to put in place measures to address the spread of disinformation and inauthentic use of their service.
  • Platforms’ DSA obligations must be externally and independently audited.
  • They will have to give access to publicly available data to researchers – a special mechanism for vetted researchers will be established.
  • They have to publish repositories of all the ads served on their interface.
  • Platforms need to publish transparency reports on content moderation decisions and risk management.

Global impact
Europe’s changes could have global effects. Wikipedia is tweaking some policies and modifying its terms of service to provide more information on “problematic users and content”. Those alterations will not be limited to Europe, according to the Wikimedia Foundation, which hosts the community-powered encyclopaedia.
“The rules and processes that govern Wikimedia projects worldwide, including any changes in response to the DSA, are as universal as possible. This means that changes to our Terms of Use and Office Actions Policy will be implemented globally,” it said in a statement.
Because the regulations apply to platforms that operate globally, the mitigations companies adopt to comply are likely to have a ripple effect beyond Europe.
What are tech giants doing to comply?
Many tech companies have already outlined the ways in which they intend to comply with the DSA. Here is a brief overview of the most notable ones.
1. Google
While Google says it already complies with some of the policies envisioned by the DSA, such as giving YouTube creators the right to appeal video removals and restrictions, it has announced that it is expanding its Ads Transparency Center to meet the requirements outlined by the legislation. The company has also committed to expanding data access for researchers to provide more information about “how Google Search, YouTube, Google Maps, Google Play and Shopping work in practice.” It will also improve its transparency reporting and analyze potential “risks of illegal content dissemination, or risks to fundamental rights, public health or civic discourse.”
2. Meta
Meta, the parent company of Facebook and Instagram, is working to expand its Ad Library, which currently compiles the ads shown on its platforms. The company will soon start displaying and archiving all the ads that target users in the EU while also including the parameters used to target the ads, as well as who was served the ad.
In June, Meta released a lengthy report about how its algorithm works across Facebook and Instagram as part of its push toward transparency. It will also start allowing European users to view content chronologically on Reels, Stories, and Search on both Facebook and Instagram — without being subject to its personalization engine.
3. TikTok
Similar to the measures Meta is rolling out, TikTok has also announced that it’s making its algorithm optional for users in the EU. When the algorithm is disabled, users will see videos from “both the places where they live and around the world” in their For You and Live feeds instead of videos based on personal interests.
It will also enable users to view content chronologically on their Following and Friends feeds. TikTok is making some changes to its advertising policies as well. For European users aged 13 to 17, TikTok will stop showing personalized ads based on their activity in the app.
4. Snap
Snapchat will also give users in the EU the option to opt out of personalized feeds on its Discover and Spotlight pages and has published reports on how it ranks posts on these feeds. The company has committed to giving users more information about why their posts or accounts have been removed, as well as the tools they need to appeal the decision.
In addition, Snapchat will no longer serve personalized ads to European Snapchat users aged 13 to 17. It will also create an archive of targeted advertisements it shows in the EU and will give European Snapchat users over the age of 18 more control over the ads they see.
Conclusion
With the implementation of the DSA, the EU aims to change the oversight of large online platforms. Until now, regulators have tried to fix problems – such as the spread of disinformation and violations of antitrust rules – after the fact. The new rules are meant to help them get ahead of the game by setting clear obligations that online platforms must follow. The DSA applies to all online businesses, but the strictest requirements fall on bigger services, defined as those with more than 45 million users in the EU. These platforms must share more information with regulators about how they moderate content, decide what users see and use artificial intelligence, and they must allow vetted researchers and auditing firms to examine internal data to check whether the rules are being followed. Other changes will be more visible to users. Platforms must make it easy for users to report content they think is illegal and must remove it quickly if it breaks the law. They must also tell users if their content is removed or hidden, and explain why. Targeted advertisements will no longer be allowed if they are based on sensitive personal data such as religion or sexual orientation, and using personal data to show ads to children and teenagers will also be banned.

The writer is an expert on International Law.
