How Russian Twitter Bots Helped Normalize Fringe Theories About the Mail Bomber & What It Means for Businesses on Social Media

Last week's news of a terrifying campaign of letter bombs mailed to prominent Democratic politicians and supporters drew round-the-clock media coverage. Law enforcement rightly focused on the details and clues in the "real" world. Meanwhile, at the first news of a bomb sent to the home of George Soros, our research team noted a sudden surge of activity in our proprietary bad actor database: Russian Twitter bots were being woken up and activated to wage an intense disinformation campaign. We published an exhaustive study of Russian bot behavior and tactics earlier this year, How Russian Twitter Bots Weaponize Social Media. However, the fast-moving, fast-evolving story around these mail bombs proved to be an opportunity to re-examine the anatomy of a disinformation campaign in real time.

In this post, we'll offer an analysis of bot activity around the mail bomb campaign (allegedly orchestrated by Cesar Sayoc of Florida). We dug into our bad actor database, which has grown from 320,000 to over 500,000 confirmed bots since our previous report. At the end, we offer key lessons for companies looking to get proactive about cybersecurity and brand reputation protection.

The Bots Awaken

In our previous report, we mentioned that groups of bots go dormant for long periods of time. Think of them as battalions given leave. Each group has its own area of "interest," and during down periods the bots keep up minimal activity to pose as "real" individuals. In the case of the fast-breaking news story around the letter bombs, we saw how quickly the bots were called up. We also noted the volume of output required to hijack a news story, or to exert influence over it. From October 23-26, Russian Twitter bots tweeting about the letter bombs sent 2,021 tweets.

[Figure: Bot tweet volume vs. search volume over time]

Unlike previous campaigns, in which bot activity parallels and tries to overwhelm search trends, here we saw the bots move rapidly before the story really started to take hold in the public consciousness. From this activity, we can surmise that bot operators saw something of a golden opportunity (however unfortunate) to advance their agenda. In our previous report, we noted that news hijacking is an intense undertaking and cannot be sustained for long. However, when done correctly, as with this campaign, a great deal of initial effort can yield the desired results. The red line above shows an overwhelming volume of bot tweets designed to seed several lines of conversation and actively shape Twitter users' perception of breaking news. Bot activity tapers as search queries increase, because the mission has largely been accomplished: content has been disseminated at scale, and bot operators need only sit back and let other bots and real users share that content.
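For readers who want to reproduce this kind of comparison, here is a minimal sketch, assuming you have exported timestamped bot tweets and hourly search-interest data. The file names, column names, and campaign window below are illustrative, not our production tooling.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical inputs: one row per bot tweet, and hourly search interest.
tweets = pd.read_csv("bot_tweets.csv", parse_dates=["created_at"])
search = pd.read_csv("search_interest.csv", parse_dates=["hour"], index_col="hour")

# Bucket bot tweets into hourly volumes across the Oct 23-26 window.
bot_volume = (
    tweets.set_index("created_at")
          .loc["2018-10-23":"2018-10-26"]
          .resample("1H")
          .size()
)

# Plot both series: a bot spike that precedes the search spike suggests
# the bots are seeding a story rather than chasing one.
fig, ax = plt.subplots()
bot_volume.plot(ax=ax, color="red", label="Bot tweets per hour")
search["interest"].plot(ax=ax, color="blue", label="Search interest")
ax.set_xlabel("Time")
ax.set_ylabel("Volume")
ax.legend()
plt.show()
```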

Conversation is the Weapon

As in our first report, we wanted to understand how the bots talked about this topic. It wasn't enough to see that the bad actors engaged with it; we wanted to categorize the content being disseminated. To that end, we used natural-language processing to categorize the bots' tweets according to tone and conversational function.
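As a toy illustration of this categorization step (a stand-in for our production NLP pipeline, which we don't reproduce here), a simple keyword-cue classifier might look like the sketch below. The category names echo the chart that follows; the cue lists are hypothetical.

```python
# Map each content category to rough keyword cues (hypothetical lists).
CATEGORY_CUES = {
    "mockery": ["joke", "lol", "pathetic", "clown"],
    "false_flag": ["false flag", "staged", "hoax", "crisis actor"],
    "media_distrust": ["fake news", "msm", "cover up"],
}

def categorize(tweet_text: str) -> str:
    """Return the first category whose cue words appear in the tweet."""
    text = tweet_text.lower()
    for category, cues in CATEGORY_CUES.items():
        if any(cue in text for cue in cues):
            return category
    return "other"

print(categorize("Obviously staged. #FakeBombHoax"))  # -> "false_flag"
```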

[Figure: Bot tweet content by category]

Here we see that the majority of tweets were focused on mockery and ridicule, likely to delegitimize the alarm, or simply cast doubt on the mainstream media's account. Disinformation doesn't require providing a viable counter-story, only sowing enough doubt about the orthodoxy to damage its credibility. "False Flag" and similar conspiracy theories took second place last week, though more attention has been paid to them in the aftermath of Sayoc's arrest.

Perhaps the bot operators thought mockery was more believable than applying too much pressure through conspiracy theories? The psychology and decision-making process are harder to assess. What is clear is that psychological operations (PSYOP) professionals convey selected information and indicators to audiences to influence their emotions, motives, objective reasoning, and ultimately their behavior. As scrutiny of social media has increased and awareness of bot operations has grown, it's reasonable to assume that bot operators will adjust their tactics.

Punch the accelerator on fringe theories, and you'll alienate the intended influence target: the reasonable middle. For this reason, bot operators will likely keep re-tooling their tactics to hew as closely as possible to the user behavior and linguistic patterns of the audience targeted for influence.

Quality vs. Quantity of Disinformation

But false flag conspiracy theories did play a role in confusing the public and sowing division. If Russian bots were throttling back on fringe notions in favor of ridicule, how did these theories leap into the mainstream and get shared by prominent conservative pundits? The answer is twofold: strategic connections and measured influence.

In 2016, bot operators leveraged social media's massive scale. In fact, operators employed US-based start-up technologies to run roughshod over social media, launching fusillades across multiple channels.

Two years later, Russian bot operations are both more sophisticated and more efficient. Overwhelming volume is no longer required; bots need only follow and influence the right people. The bots engaging with the letter bomb story sent 2,021 tweets, a seemingly low number given the goal of influencing public opinion and the fleeting nature of tweets. (It's worth reiterating, per our previous report, that bots disseminate and share content designed for both liberal and conservative audiences.) But when we step back to analyze the users either following bots (unknowingly or otherwise) or sharing the content within this time frame, the potential reach of these tweets grows to more than 10 million users. In the chart below, the size of each square is commensurate with that user's follower count. It's clear that a successful disinformation campaign relies on content cascading through highly influential users, users with average followings, and dedicated disinformation sources alike.

[Figure: Potential reach of bot tweets; each square sized by the user's follower count]
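As a back-of-the-envelope illustration of that reach math, the sketch below simply sums follower counts across a handful of hypothetical sharing accounts. A real estimate would also discount overlapping audiences and timeline decay, but even the naive upper bound shows how a couple thousand tweets can touch millions of users.

```python
# Hypothetical sharing accounts mapped to follower counts.
sharers = {
    "front_news_bot": 18_000,        # bot posing as a news outlet
    "influential_pundit": 2_400_000, # highly influential real user
    "average_user_1": 310,
    "average_user_2": 1_150,
}

# Naive upper bound: every follower of every sharer could see the content.
potential_reach = sum(sharers.values())
print(f"Potential reach: {potential_reach:,} users")
# Potential reach: 2,419,460 users
```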

In this way, bot operators were able to rely on putting fringe content in front of the right users, and then allowing the normal sharing and retweet behavior of social media to naturally inject conspiracy content into the conversation.

[Figure: Example of conspiracy content disseminated by bots]

This content gets amplified by bots designed to look like more mainstream individuals or entities, until finally breaking through to influential real users (via NBC, emphasis added):

Those theories also propagated throughout Twitter, where Donald Trump Jr. liked a tweet that claimed "fake bombs made to scare and pick up blue sympathy vote."

The tweet, posted by the account @USANews007, included the hashtags #FakeBombHoax and #VoteRed. Trump Jr. liked the tweet at around 4:50 p.m. ET Thursday.

The user, @USANews007, noted above and in the chart as "USA NEWS", is a confirmed bot in our database, and is linked to a front "news" website, much like the Iranian-backed sites identified by FireEye earlier this year as peddling stories friendly to Iranian government interests. The urge to share like-minded media, actualized by social networks, has again been weaponized for disinformation and influence. (NB: As of this writing, the @USANews007 account had been suspended by Twitter.)

What This Means for Companies Using Social Media

The disinformation campaign around the recent mail bomber is startling and terrifying. It's easy to write it off as a political story and move on with business. However, as we've pointed out, bot operations carry frightening implications for private enterprise. Nor are bots solely the remit of state-sponsored operators. The same technology and approaches are employed by criminal networks seeking financial gain. We have already seen bots deployed in the following ways:

  • To game social network algorithms in black hat SEO campaigns to surface brand impersonation accounts or sites
  • To mobilize against companies, attacking brand reputations or casting doubt about product quality
  • To initiate phishing or spear-phishing attacks on VIPs and mid-level personnel to gain access to network infrastructure

Businesses must take brand protection into their own hands. They cannot and should not rely on platforms to protect brand reputation. How Facebook polices its network ultimately has no impact on Twitter. Coordination and a unified defense are paramount when modern cyber threats are cross-channel. What's more, if Twitter can't act against real individuals like Cesar Sayoc quickly enough, how will it possibly contend with manufactured public opinion or bots designed to look like real people? Compounding the problem is the fact that typical cyber defenses are designed for the perimeter, meaning most CISOs and CMOs have no visibility into bots or other threats outside that perimeter. Do you know if your brand accounts are being followed by bots? Do you know how many of your key stakeholders are following bots themselves?
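To make that audit concrete, here is a minimal sketch, assuming you can export your brand account's follower IDs (for example, via the Twitter API) and maintain a set of confirmed bot account IDs. All identifiers and numbers below are hypothetical.

```python
def audit_followers(follower_ids: set, known_bot_ids: set) -> dict:
    """Flag which of a brand account's followers appear in a bot database."""
    flagged = follower_ids & known_bot_ids
    return {
        "followers": len(follower_ids),
        "flagged_bots": len(flagged),
        "bot_ratio": len(flagged) / max(len(follower_ids), 1),
    }

followers = {"1001", "1002", "1003", "1004"}  # exported follower IDs
bot_db = {"1002", "9999"}                     # confirmed bot account IDs
print(audit_followers(followers, bot_db))
# {'followers': 4, 'flagged_bots': 1, 'bot_ratio': 0.25}
```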

Social media is here to stay, and the world will continue to grow more connected. Protect your company's investment in digital technology: contact us today for a free risk assessment and start building a proactive defense.

 

About the author
Otavio Freire

As the President, CTO and Co-Founder of SafeGuard Cyber, Mr. Freire is responsible for overseeing development and innovation within SafeGuard's enterprise platform. He guides efforts that enable customers to significantly impact their sales, marketing, and enterprise operations via better cyber protection in social and digital channels.
