The debate around misinformation on social platforms, and how it should be policed, is incredibly complex, with no blanket solutions. Removing clearly false reports seems like the most logical and effective step, but that's not always so clear cut, and leaning too far the other way by removing too much can limit free speech and valuable debate.

Either approach has dangers, and today, YouTube’s Chief Product Officer Neal Mohan has provided his perspective on the issue, and how YouTube is looking to balance its approach to misinformation with the need to facilitate an open platform for all users.

First off, in tackling medical misinformation specifically, the key topic of the moment, Mohan notes that YouTube has removed over a million videos related to coronavirus information since February 2020, including those promoting false cures or claims that the pandemic is a hoax.

“In the midst of a global pandemic, everyone should be armed with absolutely the best information available to keep themselves and their families safe.”

That said, YouTube has facilitated the spread of a significant amount of COVID misinformation. Last May, for example, a controversial anti-vax video called ‘Plandemic’ was viewed over 7 million times on YouTube before it was removed.

The challenge for YouTube in this respect, as it is with Facebook, is scale. With so many people active on the platform at all times, it's difficult for YouTube to act swiftly enough to catch everything, and even a small delay in enforcement can lead to millions more views, and a much bigger impact.

On this, Mohan notes that of the 10 million videos the platform removes for Guideline violations every quarter, the majority don't even reach 10 views. But those are averages, and there will be cases like 'Plandemic' that slip through the cracks, something Mohan also acknowledges.

“Speedy removals will always be important but we know they’re not nearly enough. Instead, it’s how we also treat all the content we’re leaving up on YouTube that gives us the best path forward.” 

On this front, Mohan says that another element of YouTube’s approach is ensuring that information from trusted sources gets priority in the app’s search and discovery elements, while it subsequently seeks to reduce the reach of less reputable providers.

“When people now search for news or information, they get results optimized for quality, not for how sensational the content might be.” 

Which is the right way to go – optimizing for engagement seems like a path to danger in this respect. But then again, the modern media landscape can also cloud this, with publications essentially incentivized to publish more divisive, emotion-charged content in order to drive more clicks. 

We saw this earlier in the week, when Facebook’s data revealed that this post, from The Chicago Tribune, had gleaned 54 million views from Facebook engagement alone in Q1 this year.

[Image: screenshot of the Chicago Tribune story]

The headline is misleading – the doctor was eventually found to have died from causes unrelated to the vaccine. But you can imagine how this would have fueled anti-vax groups across The Social Network – and some, in response, have said that the fault in this instance was not Facebook’s systems, which facilitated the amplification of the post, but The Chicago Tribune itself for publishing a clearly misleading headline.

Which is true, but at the same time, all publications know what drives Facebook engagement – and this case proves it. If you want to maximize Facebook reach, and referral traffic, emotional, divisive headlines that prompt engagement, in the form of likes, shares and comments, work best. The Tribune got 54 million views from a single article, which underlines a major flaw in the incentive system for media outlets.

It also highlights the fact that even 'reputable' outlets can publish misinformation, and content that fuels dangerous movements. So even with YouTube's focus on sharing content from trusted sources, that won't always be a solution to such problems.

Which Mohan further notes:  

“In many cases, misinformation is not clear-cut. By nature, it evolves constantly and often lacks a primary source to tell us exactly who’s right. Like in the aftermath of an attack, conflicting information can come from all different directions. Crowdsourced tips have even identified the wrong culprit or victims, to devastating effect. In the absence of certainty, should tech companies decide when and where to set boundaries in the murky territory of misinformation? My strong conviction is no.”

You can see, then, why Mohan is hesitant to push for more removals, a solution often pressed by outside analysts, while he also points to the growing interference of oppressive regimes seeking to quash opposing views through censorship of online discussion.

“We’re seeing disturbing new momentum around governments ordering the takedown of content for political purposes. And I personally believe we’re better off as a society when we can have an open debate. One person’s misinfo is often another person’s deeply held belief, including perspectives that are provocative, potentially offensive, or even in some cases, include information that may not pass a fact checker’s scrutiny.”

Again, the answers are not clear, and for platforms with the reach of YouTube or Facebook, this is a significant element that requires investigation, and action where possible.

But it won't solve everything. Sometimes, YouTube will leave things up that should be removed, leading to more potential issues in exposure and amplification, while other times it will remove content that many believe should have been left up. Mohan doesn't deny this, nor shirk responsibility for it, and it's interesting to note the nuance factored into this debate when trying to determine the best way forward.

There are cases where things are clear cut. Under the advice of official medical bodies, for example, COVID-19 misinformation should be removed. But that's not always how it works. In fact, more often than not, judgment calls are being made on a platform-by-platform basis, when they likely shouldn't be. The optimal solution, then, could be a broader, independent oversight group making calls on such issues in real time, and guiding each platform on its approach.

But even that could be subject to abuse.

As noted, there are no easy answers, but it is interesting to see YouTube’s perspective on the evolving debate. 
