
What are the social responsibilities of the dominant digital platforms?

Most digital platforms have emerged as big winners from the pandemic. Yet they are under regulatory pressure on both sides of the Atlantic. A fundamental question is whether and how these businesses should be held accountable for what happens on their websites.

In the spring of 2020, like millions of others, we wondered how we might cope with weeks of lockdown. Our experiences over the past 16 months would certainly have been very different, and in many ways much harder, if the internet had not existed. Platforms such as Netflix have offered us entertainment, while FaceTime has allowed us to video-chat with loved ones. Other platforms have enabled many to work from home or have contributed to keeping us safe via test and trace.

Despite the many benefits these digital platforms bring to our lives, they can also have negative effects, in particular on our mental health. This is hard to ignore when there are 3.2 billion people using social networks worldwide and the average user spent nearly two and a half hours a day on them in 2020.

As a result, the debate on how to regulate ‘Big Tech’ effectively has restarted. Regulatory initiatives that were slowed by the start of the pandemic are regaining momentum, in both Europe and the United States. 

The platform business model and addictiveness

An important consideration is the responsibility of platforms, from both a legal and a social perspective. In a post-Covid-19 context, this is particularly pertinent, as the various lockdowns and restrictions of the past year have increased the amount of time many spend online. Ofcom data reveal that in April 2020 – at the height of the first lockdown – UK adults were spending over four hours a day online, and the use of digital platforms surged (by almost 2,000% in the case of Zoom). Two features of the platform business model help to explain why this matters.

First, platforms operate in the presence of ‘network effects’, whereby the benefit obtained from using a service increases when other users join it. In these markets, it is of primary importance to attract and retain users. An established user base represents an important barrier to entry for new (and perhaps more innovative) platforms, because people will be reluctant to move to a new platform if their friends and family are all using an existing one.
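To see why, consider a stylised numerical sketch (all numbers are hypothetical): assume that a user’s benefit from a platform is its intrinsic quality plus a term that grows with the number of friends already using it. An entrant with few users then offers little value, even if its underlying quality is higher.

```python
# Stylised illustration of network effects (all numbers are hypothetical).
# Assumption: a user's benefit = intrinsic quality + network_strength * friends already on the platform.

def user_benefit(quality: float, network_strength: float, friends_on_platform: int) -> float:
    """Benefit a single user gets from joining a platform."""
    return quality + network_strength * friends_on_platform

# Incumbent: lower intrinsic quality, but almost all of the user's friends are there.
incumbent = user_benefit(quality=1.0, network_strength=0.1, friends_on_platform=150)
# Entrant: higher intrinsic quality, but hardly any friends have joined yet.
entrant = user_benefit(quality=3.0, network_strength=0.1, friends_on_platform=5)

print(f"Benefit from incumbent: {incumbent:.1f}")  # 16.0
print(f"Benefit from entrant:   {entrant:.1f}")    # 3.5
# Even a clearly better entrant cannot attract users until many friends switch together.
```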

Second, the widespread use of the advertising-based business model might provide perverse incentives: platforms need to keep users ‘busy’ online as much as possible, to engage with their content, obtain impressions and boost the ‘clickthrough rate’ of advertising banners. 
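The scale of this incentive can be illustrated with a back-of-the-envelope calculation (all figures below are illustrative assumptions, not data from any real platform): advertising revenue is roughly the number of users, times minutes spent, times ad impressions per minute, times the clickthrough rate, times the price per click – so revenue rises one-for-one with attention.

```python
# Back-of-the-envelope advertising revenue (all figures are illustrative assumptions).

def daily_ad_revenue(users: float, minutes_per_user: float,
                     impressions_per_minute: float, clickthrough_rate: float,
                     revenue_per_click: float) -> float:
    """Approximate daily ad revenue as users x time x impressions x CTR x price per click."""
    impressions = users * minutes_per_user * impressions_per_minute
    clicks = impressions * clickthrough_rate
    return clicks * revenue_per_click

base = daily_ad_revenue(users=1e6, minutes_per_user=60,
                        impressions_per_minute=2, clickthrough_rate=0.01,
                        revenue_per_click=0.10)
longer = daily_ad_revenue(users=1e6, minutes_per_user=90,  # users kept 'busy' 50% longer
                          impressions_per_minute=2, clickthrough_rate=0.01,
                          revenue_per_click=0.10)

print(f"Revenue at 60 minutes/day: ${base:,.0f}")    # $120,000
print(f"Revenue at 90 minutes/day: ${longer:,.0f}")  # $180,000
# Revenue rises one-for-one with attention, which is why engagement is the key metric.
```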

A number of studies have explored this mechanism, highlighting how platforms can make a priority of keeping users’ attention rather than the quality of service they provide – and how they go about competing for attention (Ichihashi and Kim, 2021; Wu, 2016; and Prat and Valletti, 2021).

Given users’ limited attention, one way in which platforms keep them busy is to give prominence in news feeds to catchy videos or ‘clickbait’ articles rather than op-eds on current issues, as the former are more likely to go viral.

Platforms can choose the ‘addictiveness’ of their service: a more addictive platform offers users lower quality but is better at capturing their attention. In this context, more competition can harm consumers, because addictiveness may substitute for quality.
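This trade-off can be sketched with a toy model – a simplification for illustration only, not the formal model in the papers cited above – in which the attention a platform captures rises with addictiveness while users’ well-being falls, and the platform is paid for attention rather than for well-being.

```python
# Toy model of the addictiveness/quality trade-off (purely illustrative; not the
# formal model in the literature cited above).
# Assumption: attention captured rises with addictiveness, user well-being falls.

def attention(addictiveness: float) -> float:
    """Time/attention the platform captures (rises with addictiveness)."""
    return 1.0 + 2.0 * addictiveness

def user_wellbeing(addictiveness: float) -> float:
    """Users' experience of the service (falls as addictive design replaces quality)."""
    return 2.0 - 1.5 * addictiveness

ad_rate = 1.0  # revenue per unit of attention (assumed)

for a in (0.0, 0.5, 1.0):
    revenue = ad_rate * attention(a)
    print(f"addictiveness={a:.1f}  platform revenue={revenue:.2f}  user wellbeing={user_wellbeing(a):.2f}")

# A platform paid for attention prefers higher addictiveness even though user
# well-being falls -- the misalignment that motivates calls for regulation.
```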

This is a particularly pressing issue because of the link between use of social media platforms and declines in mental health, particularly among adolescents. A survey by the Royal Society for Public Health reports that use of platforms such as Instagram, Facebook, Snapchat and Twitter increases feelings of loneliness, anxiety, depression and poor body image among 14-24 year olds.

Analysis shows that restricting consumers’ platform usage may reduce addictiveness and improve users’ experience (Ichihashi and Kim, 2021). In this context, some platforms may self-regulate, by offering individuals the choice to cap their usage. Examples of this include time limits on Facebook and Instagram, screen time settings on iPhones, and Google Digital Wellbeing on Android phones (Abrardi et al, 2021).

While this may seem a step in the right direction and a demonstration that platforms are finally taking responsibility, these voluntary caps may in fact prompt users to consent to more data disclosure, without anticipating the future consequences.

Further studies suggest that self-regulation is insufficient to tackle platform addiction (Scott Morton et al, 2020). Neuroscientists have established that the neural pathways associated with addiction to substances like tobacco and alcohol are similar to those linked with behavioural addictions, including both gambling and social media use.

These platforms optimise affective stimuli to drive higher levels of engagement. Consequently, the behavioural patterns of people who check social media with ever-increasing regularity – for ‘likes’ and other forms of interaction – are very similar to those seen in individuals suffering from other forms of addiction.

This arguably underscores the need for regulation, as other industries with dangers of addiction – such as tobacco and gambling – are already regulated.

Regulation is also desirable in the presence of ‘asymmetric information’, an economic term for when one party has more or better information than the other. In this case, social media platforms’ access to unlimited amounts of individual users’ data seems to call for regulation.

Again, there are existing regulatory tools to draw on, including age restrictions, protection by intermediaries (such as a physician), time and place restrictions, disclosure of risks and limits on advertising. Society is likely to need new ideas, as well as these established methods, to create regulation that makes social media safe for everyone.

More generally though, platforms’ self-regulation may just be an attempt to guide unavoidable forms of regulation by policy-makers and to influence current discussions.

Platform liability: the economics and legal debate

While much of the debate about addictive platforms so far concerns their social responsibility, the regulatory debate on both sides of the Atlantic has dealt with the legal responsibility for what happens on their websites. Particular attention has been devoted to the issues of online security, illegal content and misconduct, and how to ensure that platforms have the incentives to take a more pro-active role on these issues.

In the United States, unlike publishers, platforms are protected by Section 230 of the Communications Decency Act, which provides immunity from liability for providers and users of an ‘interactive computer service’ that publishes third-party content.

For example, in 2019, Facebook was found not liable for violent attacks that were orchestrated by accounts linked to the terrorist group Hamas (Force v. Facebook, Inc., 934 F.3d 53 (2d Cir. 2019)). In this case, the US Court of Appeals for the Second Circuit upheld Facebook’s immunity because the company did not develop the Hamas content itself and simply acted as an ‘interactive computer service’.

As online intermediaries may have the instruments to stop online misconduct and prevent harm – for example, thanks to advances in technology – it is widely recognised that such immunity should be subject to some limitations.

For example, in an important decision, US courts established that Snapchat’s parent company, Snap, can be sued over an app feature – a speed filter – that allegedly encouraged reckless driving. The parents of teenagers killed in a car accident claimed the youngsters believed – and that Snap knew they believed – that they would get a secret achievement for hitting a speed of 100 miles per hour. The court concluded that Snap was not protected by Section 230 because the company ‘indisputably designed’ the reward system and speed filter, which allegedly created a defective product that encouraged dangerous behaviour.

This was a first, though not yet final, step towards greater responsibility for online intermediaries. In another example, in January 2021, Italy’s data privacy authority ordered the video-sharing app TikTok temporarily to block the accounts of users whose ages had not been confirmed. The decision came after a 10-year-old girl in Sicily died of asphyxiation while taking part in a ‘blackout challenge’ shown by the app.

More generally, many observers note that platforms routinely curate their content and design their news feeds to optimise revenue. The question is therefore how to make sure that their curation efforts are in line with what is socially desirable.

At the same time, it should be recognised that screening and monitoring user-generated content (as well as identifying illegal products and counterfeits on an e-commerce platform) might not be an easy task. Content moderation is a costly and imperfect activity, subject to errors and, in the case of borderline content, potentially conflicting with freedom of speech (Buiten et al, 2020).

The European Commission has taken formal steps by proposing the Digital Services Act (DSA), which is currently being discussed by the European Parliament. The DSA proposal maintains the current rule of liability exemption. This liability system dates back to the e-commerce directive of 2000 but, unlike in the United States, online intermediaries can benefit from the exemption only if they are considered ‘passive’ and are not aware of illegal conduct taking place on the platform.

With the new rules, more transparency will be provided, users will be able to challenge platforms’ content moderation decisions, and very large online intermediaries will be subject to additional obligations, including the need to provide a systemic risk assessment. These large platforms usually have a wide user base across member states and, in light of their crucial role as intermediaries, can cause significant harm to society.

The DSA also aims to improve transparency by introducing new obligations on platforms to disclose to regulators how their algorithms work, how decisions to remove content are taken, and the ways in which advertisers target users. Importantly, liability is enhanced both for the content that circulates online (for example, dangerous or harassing videos and posts, and fake news) and for products sold in marketplaces (such as counterfeit goods and illegal or hazardous products). These products represent a significant problem for brand owners, as the OECD has recently recognised.

Introducing platform liability is not problem-free, as changing a platform’s incentives might ultimately backfire, leading to unintended effects. For example, online intermediaries might react to greater obligations to moderate content and take down illicit products by passing these additional costs on to business users (through higher commission fees) or to individual users (for example, through more data collection or higher privacy costs) (Lefouili and Madio, 2021).
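A simple pass-through calculation (all figures are hypothetical) illustrates the concern: if new moderation obligations raise a platform’s annual costs, part of that cost can reappear as a higher commission fee charged to business users.

```python
# Hypothetical pass-through of content-moderation costs (all figures are assumed).
transactions_per_year = 10_000_000
average_transaction_value = 20.0          # currency units per transaction
commission_rate = 0.10                    # 10% fee charged to business users

extra_moderation_cost = 5_000_000         # new annual compliance/moderation cost (assumed)
pass_through_share = 0.6                  # share of the cost passed on to sellers (assumed)

extra_fee_revenue_needed = extra_moderation_cost * pass_through_share
new_commission_rate = commission_rate + extra_fee_revenue_needed / (
    transactions_per_year * average_transaction_value)

print(f"Commission rate before: {commission_rate:.2%}")   # 10.00%
print(f"Commission rate after:  {new_commission_rate:.2%}")  # 11.50%
# Business users end up bearing part of the cost of the new obligations.
```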

Further, platforms adopting a ‘hybrid/dual’ role and competing with third parties might become more active in their ecosystem, increasing the number of products that they sell directly. For example, platforms such as Amazon, Apple and Google operate in a dual mode, as they run marketplaces in which they also sell their own products. Whether this active role of a platform is socially desirable is not immediately clear.

Where can I find out more?

Who are experts on this question?

Authors: Carlo Reggiani, Leonardo Madio and Andrea Mantovani