In the days before and after last year’s presidential election, Twitter attempted to limit the spread of election misinformation posted by former President Donald Trump by using warning labels and blocking engagement with the tweets. But a new study shows those posts actually spread further and longer than ones that were not flagged.
What You Need To Know
- A study found that tweets about the 2020 presidential election by Donald Trump that were labeled by Twitter as disputed or potentially misleading spread further and longer than ones that were not flagged
- The study was conducted by New York University’s Center for Social Media and Politics and its findings were published this week in the Harvard Kennedy School Misinformation Review
- Zeve Sanderson, co-author of the paper and executive director of the Center for Social Media and Politics, said it’s not clear whether Twitter’s warning labels worked or not
- A Twitter spokesperson insisted the company’s actions had an impact on the spread of election misinformation
Researchers with New York University’s Center for Social Media and Politics found that Trump’s messages that were labeled by Twitter as disputed and potentially misleading — what the paper's authors referred to as “soft intervention” — spread further than tweets that received no intervention at all.
Tweets that received “hard intervention” — they were labeled and blocked from engagement with users, including retweets — had a wider reach on Facebook, Instagram and Reddit than those that received either soft or no intervention, the study found. The posts would resurface on other social media platforms in the form of links, quotes or screenshots. On average, tweets that received the more severe action were posted on pages with over five times as many followers as messages that received soft or no intervention.
Zeve Sanderson, co-author of the paper and executive director of the Center for Social Media and Politics, said it’s not clear whether Twitter’s warning labels worked or not.
“It’s possible Twitter intervened on posts that were more likely to spread, or it’s possible Twitter’s interventions caused a backlash and increased their spread,” he said.
Despite the tweets’ broad exposure, previous research suggests the labels could lower people’s trust in the false content, something the NYU researchers noted in their paper.
The NYU researchers analyzed 1,149 political tweets from Trump from Nov. 1, 2020, to Jan. 8, 2021 — 303 received soft interventions, 16 hard interventions and 830 no intervention at all. They identified the same messages on Instagram, Facebook and Reddit and collected data from those platforms, which have varying content moderation rules. The study’s findings were published this week in the Harvard Kennedy School Misinformation Review.
“The findings underscore how intervening on one platform has limited impact when content can easily spread on others,” said co-author and research scientist Megan A. Brown. “To more effectively counteract misinformation on social media, it’s important for both technologists and public officials to consider broader content moderation policies that can work across social platforms rather than singular platforms.”
A Twitter spokesperson insisted the company’s actions had an impact on the spread of election misinformation. The platform labeled about 300,000 disputed or possibly misleading tweets from Oct. 27 to Nov. 11 last year, 74% of which were viewed after the warning was added, the company said. Additionally, a prompt that warned users before sharing such tweets resulted in a 29% reduction in “quote tweets,” the spokesperson said. And Twitter placed messages on people’s feeds that aimed to preemptively debunk false claims about election results and mail-in voting.
“The challenges of misinformation continue to be complex, and require a whole-of-society approach,” the spokesperson said in an email to Spectrum News. “We continue to research, question, and alter features that could incentivize or encourage behaviors on Twitter that negatively affect the health of the conversation online or could lead to offline harm.”
Twitter permanently banned Trump on Jan. 8, two days after a mob loyal to the then-president stormed the U.S. Capitol. The social media giant cited “the risk of further incitement of violence,” saying that the president’s tweets after the riot were being interpreted by some of his supporters as calls “to replicate the violent acts that took place on January 6, 2021.”
Trump had used Twitter to spread false information about widespread election fraud leading up to the insurrection.
Facebook, Instagram and YouTube also suspended Trump-related accounts.