
Taking on Misinformation: Facebook’s Ban on Anti-Vaccine Ads

On 13 October 2020, Facebook announced that it would ban advertisements discouraging vaccines or suggesting that they are ineffective. The step comes amid the Covid-19 pandemic, during which misinformation about the virus has proliferated on social media.

Photograph: Kon Karampelas on Unsplash

Misinformation on the topic has ranged from a conspiracy theory that the virus was created in a Chinese lab, to independent distributors selling online a “miracle mineral solution” containing sodium chlorite as a cure for the virus, a substance the United States Food and Drug Administration has warned is potentially life-threatening and has discouraged people from purchasing. The ban on such advertisements also came just months before Donald Trump was banned from Facebook, Twitter, and Google. The latter ban can similarly be seen as a means by which social media platforms have attempted to limit misinformation (although Twitter cited the potential incitement of violence as the reason for banning Trump’s account).

The logic behind the ban

The World Health Organization (WHO) describes the state of affairs as an “infodemic,” highlighting that the overload of misinformation on Covid-19 hinders the public health response to the virus. Misinformation is sometimes deliberately spread to advance particular agendas, and by working against the authorities’ response to the virus it causes significant harm to public health. This was demonstrated in Iran, where misinformation proved fatal: hundreds of citizens died after ingesting methanol, which posts on social media had suggested was a cure for Covid-19.

The reasoning behind Facebook’s ban, then, is that ads discouraging vaccines or claiming that they are dangerous risk harming individuals. Indeed, the company has an existing guideline under which it sometimes removes misinformation that contributes to physical harm. Additionally, WHO member states passed a resolution in May 2020 calling on states and international organisations to combat misinformation regarding the virus, including in online media. The new ban therefore works within these guidelines.

What are the details?

This move is significant given Facebook’s historic reluctance to limit speech on its platform. For example, the company only banned Holocaust denial on the site in October 2020. Importantly, Facebook had previously taken steps to limit misinformation in ads on its website: its advertising policy “prohibits ads that include claims debunked by third-party fact-checkers or, in certain circumstances, claims debunked by organisations with particular expertise.” This means that ads spreading hoaxes which the WHO had confirmed as false were already being rejected. The 13 October ban takes this further by prohibiting ads that discourage vaccinations, not just those spreading confirmed hoaxes.

Crucially, the new rule applies only to ads discouraging vaccinations, and not to user-generated posts, which account for a significant share of misinformation regarding Covid-19. The ban goes to the heart of the debate between free speech, a crucial liberal democratic principle, and the prevention of harm by limiting speech. In this particular case, the latter principle has justifiably outweighed the former.

How is speech on Facebook regulated?

The laws regulating speech on social media platforms such as Facebook are not easy to pin down, given that the platform is accessible in most countries. At a basic level, the key to speech regulation is found in Facebook’s Terms of Service, which differ depending on where in the world the user logs in from. In other words, domestic laws can regulate Facebook because it is providing a service in a given territory or jurisdiction. This means that there may be different terms in different countries to accommodate state or regional laws, as is the case with, for example, the EU General Data Protection Regulation, which Facebook is not legally obliged to adhere to in the United States. On the flip side, when Facebook changes its own policies, rather than being directed to do so by a state authority, those changes are incorporated into its Terms of Service and so effectively become part of the contract between Facebook and all of its users around the world.

The UK does not have specific laws on social media misinformation that are reflected in Facebook’s Terms of Service for users logging in from the UK. However, the government’s Rapid Response Unit was created in 2018 in part to monitor misinformation online. In the context of Covid-19, this means that the unit identifies misinformation on social media and coordinates a response with other government departments to combat the specific false narrative. Such a response could include working with social media platforms such as Facebook to remove the content or debunk the false narrative, a process that is likely to be lengthy.

Regardless of jurisdictional differences in speech regulation, the move by Facebook should be welcomed. Harmful speech on social media is not limited to right-wing extremists inciting hatred; it operates within a larger context, including that of public health. The ban on ads discouraging vaccines, along with the banning of Donald Trump from a number of social media platforms, demonstrates the importance of these platforms governing themselves, particularly given their overall contribution to the public good.


By Caroline White, Law Student on the Senior Status Programme at Queen Mary University of London
