A piece in today’s Washington Post alleges that it is not in Facebook’s financial interest to discourage extremism on our platform because research shows that outrage is what makes us more money. The research it cites did not even look at extremism. The op-ed is simply wrong.
Polarizing and extremist content is not just bad for society, it’s also bad for our business. Our business only works when people choose to use our apps because they have a positive impact on their lives, and when advertisers choose to run ads that are relevant to the people they are trying to reach. Polarizing and extremist content drives both away. That’s part of the reason we have invested in technology to quickly find and remove hate speech and other forms of extremism that violate our rules, and have built a global team of more than 35,000 people to help keep our services safe for everyone who uses them.
The research cited in the Post uses data that reflects only a specific period of time: the months leading up to last year’s US elections. That was a time when the US was experiencing historically high levels of polarization, and it’s unclear whether those results would hold in other periods or in other countries. It’s also important to note that political content is only a narrow slice of all social media content, representing just 6% of what people saw on our services during the height of last year’s election cycle. It’s reasonable to assume that number is even lower today.
The piece also paints an overly simplistic and limited picture of what a substantial body of research into polarization, and the role that Twitter and Facebook play in driving it, actually shows so far. For example, research from Stanford University in 2020 showed that in some countries polarization was on the rise before Facebook even existed, while in others it has decreased even as internet and Facebook use increased.
Research published this year by the US National Bureau of Economic Research found that the best explanation for levels of polarization across the nine countries studied was the specific conditions in each country, rather than general trends like the rise of internet use. A 2017 study published in the Proceedings of the National Academy of Sciences found that polarization in the United States has increased the most among the demographic groups least likely to use the internet and social media. And data published in 2019 from the EU suggests that levels of ideological polarization are similar whether people get their news from social media or elsewhere. One recent paper even showed that stopping social media use actually increased polarization.
However, none of these studies provides a definitive answer to the question of what role social media plays in driving polarization. The questions of what drives polarization in our society, and how best to reduce it, are complex. Much more research is clearly needed. That’s why we have not only commissioned our own research into this topic but have also asked respected academics, including some of our critics, to conduct their own research independent of us.
For example, we have undertaken a new research partnership with external academics to better understand the impact of Facebook and Instagram on key political attitudes and behaviors during the 2020 US elections, building on an initiative we launched in 2018. It will examine how people interact with our products, including content shared in News Feed and across Instagram, and the role of features like content ranking systems. Matthew Gentzkow, who previously authored a study on how Facebook increased affective polarization, is one of the collaborators.
But there is another important point missing from the Washington Post’s analysis: all social media platforms, ours included, reflect what is happening in society and what’s on people’s minds at any given moment. For example, in the weeks leading up to the World Cup, posts about soccer will naturally increase, not because we have programmed our algorithms to show people content about soccer, but because that’s what people are thinking about. And just like politics, soccer strikes a deep emotional chord with people. How they react, the good, the bad, and the ugly, will be reflected on social media.
It is helpful to see Facebook’s role in the 2020 elections through a similar lens. Last year’s elections were perhaps the most emotional and contested in American history. Politics was everywhere in our society last year: in bars and cafes (at least before the pandemic lockdowns), on cable news, at family gatherings and, yes, on social media too. And of course some of those discussions were emotional and polarizing, because our politics is emotional and polarizing. It would be strange if some of that weren’t reflected on social media.
But we also need to be very clear that extremist content is not in fact fundamental to our business model. It works against it, as last year’s Stop Hate for Profit advertising boycott showed. What drives polarization deserves a deeper examination. That’s exactly why we are working with respected academics around the world to study this issue seriously, so we can take the right steps to address it.