When President Donald Trump tweeted about a caravan of immigrants heading to the US border in late October, it set off a wildfire of misinformation on social media. Posts on Facebook and Twitter spread conspiracy theories that Democratic donor George Soros was funding the migrants and the false allegation that the group included terrorists and gang members.
It turns out it wasn’t just Republicans latching onto the story: it was also Twitter bots. Mother Jones partnered with RoBhat Labs, a non-partisan firm that tracks bot activity on social media, to show the scope of disinformation circulating on Twitter before the election. The following data was collected over the course of 24 hours, from November 4 to November 5.
To detect automated, bot-like behavior, RoBhat collects sample tweets from Twitter’s public application programming interface (API) and runs them through a machine learning model. The model looks for red flags that indicate non-human activity, such as an unusually high posting frequency. According to the company, its tool, FactCheck.Me, has a false-positive rate of roughly 1 to 2 percent.
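RoBhat has not published the details of its model, so the sketch below is purely illustrative: it assumes tweet timestamps have already been pulled from the public API and uses posting frequency, one of the red flags mentioned above, as a single stand-in signal. The threshold and function names are hypothetical, not RoBhat’s.

```python
from datetime import datetime, timedelta
from typing import List

# Hypothetical cutoff: accounts averaging more than ~50 tweets per hour over
# the sampled window are flagged for bot-like posting frequency. A real
# classifier (like RoBhat's) would combine many signals in a trained model.
TWEETS_PER_HOUR_THRESHOLD = 50.0


def posting_frequency(timestamps: List[datetime]) -> float:
    """Average tweets per hour over the span covered by the sampled tweets."""
    if len(timestamps) < 2:
        return 0.0
    span_hours = (max(timestamps) - min(timestamps)).total_seconds() / 3600.0
    return len(timestamps) / max(span_hours, 1e-6)  # guard against zero span


def looks_automated(timestamps: List[datetime]) -> bool:
    """Single-feature stand-in for a multi-signal machine learning model."""
    return posting_frequency(timestamps) > TWEETS_PER_HOUR_THRESHOLD


# Example: an account that tweeted every 30 seconds for an hour.
sample = [datetime(2018, 11, 4, 12, 0) + timedelta(seconds=30 * i) for i in range(120)]
print(round(posting_frequency(sample)))  # roughly 120 tweets per hour
print(looks_automated(sample))           # True
```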
While bot-like behavior can manipulate and distort otherwise authentic conversation on Twitter, it does not necessarily mean that the accounts involved are connected to a political influence campaign or a foreign operation. In many cases, the content amplified by bots comes from mainstream news sources or was first shared by public figures. Tweets also often mention multiple topics: many of the tweets mentioning Beto O’Rourke or Sen. Ted Cruz, for instance, also mentioned the opposing candidate.
Days before the midterm elections, Twitter is still scrambling to cut down on platform manipulation. Last week, the company apologized after “Kill All Jews” showed up as a trending topic in New York. And on Friday, Reuters reported that the platform had removed more than 10,000 accounts that posed as Democrats and posted automated messages discouraging voting. The accounts were flagged by the Democratic National Committee, which worked with consulting groups, including RoBhat, to uncover them.
Friday’s removal wasn’t the first leading up to the election. Since May, Twitter has purged more than 70 million accounts, including 50 accounts purporting to represent state political parties and hundreds associated with an Iranian influence operation. In October, Facebook also purged nearly 600 pages that appeared to be associated with an Iranian influence operation. Spam accounts violate the terms of service of both Twitter and Facebook, though both platforms have struggled to rein in the behavior.
Ash Bhat, the CEO of RoBhat, says that his company has reached out to Twitter multiple times over the past year, but has not received a response. “We believe it’s important that we work together and can only do so much if they don’t communicate,” says Bhat.
When asked for comment, a Twitter spokesperson offered a link to a thread by Yoel Roth, the company’s head of site integrity, and would not elaborate further on what the company is doing to remove bot accounts ahead of the election.
Update Tuesday, November 6, 9:40 am: Following publication, Twitter sent a response disputing the number of bots on the platform. “This research uses our public API, which does not take into account any of the preemptive work we do to stop automated activity across the service,” a Twitter spokesperson said. “On average we challenge 10 million accounts per week. While we do, they are not visible anywhere, including search, trends, and replies.”