More doubts for Facebook on polarization
Study we published used data from the "break glass" period after the 2020 election
A year ago, Science published a package of papers based on a massive set of data on Facebook and Instagram use during and following the 2020 election. The main experiment compared an experimental group that received a reverse chronological feed of posts with a control group that was fed posts selected by the algorithm used on the site at the time (from here on, the default algorithm). The default algorithm is designed to keep people on the site, is continuously tweaked by Facebook, and has been widely criticized for fueling political polarization in the US and overseas.
To address this criticism, Facebook teamed up with a distinguished group of academic authors in a carefully structured partnership that included many protections to ensure unfettered analysis of the data. The project included an “independent rapporteur,” Mike Wagner, who observed the partnership extensively. Dr. Wagner described his role in a very interesting Policy Forum published alongside the papers. The piece was titled “Independence by permission” because, while Dr. Wagner concluded that the researchers had independence in analyzing the data, they were only analyzing data that Facebook permitted them to examine. The papers concluded that the default algorithm did not contribute to polarization as much as had been assumed. In particular, one paper supported the idea that the default algorithm did not expose users to more untrustworthy news than the reverse chronological feed did.
In our cover illustration (below) and in my editorial in the package, we suggested that these findings emerged because users were already so segregated by ideology on the site that the default algorithm could not cause further polarization. Facebook took issue with this and instead made strong statements that the study showed that the default algorithm did not contribute to polarization. In a well-reported story in the Wall Street Journal, Facebook VP Nick Clegg reiterated his self-congratulatory claims, which we, along with the academic authors of the study, disputed as overstating the evidence.
A few months ago, Science received a letter pointing out that, in addition to the caveats already in the study, the experiment was conducted during a period when Facebook had implemented emergency measures (they called them “break glass measures”) in the wake of the 2020 election that may have made the default algorithm behave much like the reverse chronological feed. This could explain why the default algorithm surfaced less untrustworthy news than expected. When we received the letter, we knew that we did not have time to review and publish a separate paper on the issue and that we had an obligation to make sure that people who came to the paper would know about these concerns. We wanted to do so before the 2024 election, since if there were another contested election, the role of social media would likely be debated again, and we did not want the paper cited imprecisely. In their response letter, the authors argued, and we agreed, that the paper already contained sufficient caveats noting that these algorithmic fluctuations could be a factor and therefore that a correction was not warranted.
After considering all of these elements, Executive Editor Valda Vinson and I decided that the best way to handle the matter was to post the letter from the critics and the response from the authors as eLetters on the paper, which is where we generally post these kinds of exchanges. But we wanted to make sure that everyone who came to the paper knew about these considerations when citing it. And because nothing in the paper itself required a correction, we decided that the two of us would write an editorial laying out these considerations, posted as a related document at the top of the paper. That way, anyone who comes to the paper will see this discussion right at the top. Valda and I are very grateful for the support we get from our CEO and board that allows us to take these measures in the interest of the scientific community. One university press release from the critics described this as “debunking” the paper. We don't agree with that (spoiler alert: university press releases are sometimes exaggerated, and I ought to know, having run two universities). If the paper were debunked, we would correct it. We simply believe that everyone who reads and cites this paper should have the context, and given the time constraints, we feel we have accomplished that.
Facebook has contested the idea that these considerations weaken its claim that the algorithm does not fuel polarization. Mike Wagner told me that he did not recall discussions about the emergency measures when the authors and Facebook were meeting. When I asked Facebook whether they should have done more to make sure that the measures were considered, they correctly pointed out that it was publicly known before the papers were published that the measures had been implemented, which they apparently considered sufficient. They further said that they stood behind Nick Clegg's original statements absolving Facebook of contributing to political polarization.
These matters have garnered extensive news coverage in the past couple of days, most prominently in our own news section (which is editorially independent of the research side) and from Jeff Horwitz, who has been covering this story for the Wall Street Journal. I'm quoted there as saying that we stand by our disagreement with Facebook's framing, which we disputed even before these latest developments.
As we careen into another divisive election, precise and rigorous information on the role of social media in democracy is crucial. We will continue to document such studies as they become available.