Demystifying & Depolarizing: Understanding Social Media Election Interference from Authoritarian Regimes

What’s the issue?

Words and phrases like “election integrity,” “interference,” and “misinformation” have been trending for several years. But so have “fake news” and “hoax.” So, what’s the truth? With the growth of social media, media globalization, and cultural imperialism, or states imposing their views on other states via media (Croteau and Hoynes, 2019), bad actors are finding new and innovative ways to sabotage other countries. Authoritarian regimes have taken steps to interfere in recent US elections, and many domestic actors, by being either uninformed or complicit in not taking the threat seriously, have created an environment that makes interference easier. This issue will only grow in importance: since the 2020 election, states such as China, Iran, Cuba, and Venezuela have taken cues from Russia’s playbook to influence US voters (Brandt and Hanlon, 2021). The private sector, public sector, and media need to invest in informing the public about authoritarian interference via social media in order to lessen the partisan divide on the issue and ultimately uphold the integrity of US elections, the cornerstone of US democracy.

Who does it?

While many states engage in this activity, Russia is the biggest player perpetrating these cyber incidents. There are three different categories of involvement: state-directed, where state officials sanction the actions; state-encouraged, where state officials do not explicitly order the action, but the perpetrators are almost guaranteed government support; and state-aligned, where the perpetrator acts to support state objectives (Galante and Ee, 2018).

There are four main actors: Advanced Persistent Threat (APT) 28 and APT 29, hacking groups sponsored by the Russian government; the Internet Research Agency (IRA), a Russian organization that spreads divisive and false information through social media accounts; and CyberBerkut, a primarily Ukrainian hacking group that claims to be a domestic opposition group but is widely considered to be linked to APT 28 (Galante and Ee, 2018).

How do they do it?

There are six types of interference actions: infrastructure exploitation, vote manipulation, strategic publication, false-front engagement, sentiment amplification, and fabricated content. This article focuses on the latter three. False-front engagement is creating a fake identity to interact with other users; sentiment amplification is spreading certain viewpoints, either overtly or covertly; and fabricated content is spreading false information (Galante and Ee, 2018). Because of the design of social media apps, these malicious actions can spread quickly. The data these apps collect about the behavior of individual users also makes it easier for foreign actors to micro-target individuals with political advertisements (Bradshaw, 2020).

The Information Operations Archive (IOA), developed by the German Marshall Fund’s Alliance for Securing Democracy and Graphika, is an archive of over ten million social media posts on Twitter and Reddit made by Russian and Iranian state-sponsored operations. A search for the term “votes” generated 4,223 results from 489 unique users from the Russian IRA in the October 2018 dataset alone, and the archive contains examples of all three tactics: false-front engagement, sentiment amplification, and fabricated content (Information Operations Archive, n.d.).
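To make concrete what a search like the one above involves, here is a minimal sketch of filtering a post dataset for a term and counting unique accounts. The column names (`userid`, `tweet_text`) and the sample rows are hypothetical stand-ins, not the IOA’s actual export format.

```python
import csv
import io

# Hypothetical sample rows mimicking an IOA-style export; the real
# archive's column names and fields may differ.
sample_csv = """userid,tweet_text
user_001,Your votes will never count anyway
user_002,Great weather today
user_001,They are stealing votes in your county
user_003,VOTES don't matter when the system is rigged
"""

def search_posts(raw_csv, term):
    """Return (match_count, unique_user_count) for posts containing `term`."""
    matches = []
    for row in csv.DictReader(io.StringIO(raw_csv)):
        # Case-insensitive substring match, like a simple keyword search.
        if term.lower() in row["tweet_text"].lower():
            matches.append(row)
    unique_users = {row["userid"] for row in matches}
    return len(matches), len(unique_users)

count, users = search_posts(sample_csv, "votes")
print(count, users)  # 3 matching posts from 2 unique accounts
```

The same two numbers (matching posts, unique accounts) are what the “4,223 results from 489 unique users” figure reports at scale.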

Some of the accounts used by the IRA have handles that are strings of randomized numbers, but others appear to impersonate real people or news organizations, making them even more dangerous to public opinion. While it is impossible to know how many people viewed each tweet, some received substantial engagement through retweets and likes.
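As a toy illustration of the randomized-handle pattern mentioned above, a crude heuristic can flag handles dominated by digits. This is purely illustrative: real detection pipelines rely on far richer signals (account age, posting behavior, network structure), and the example handles below are invented.

```python
def looks_randomized(handle):
    """Heuristic: True if the handle is mostly digits (e.g. 'user8273640182')."""
    if not handle:
        return False
    digit_count = sum(ch.isdigit() for ch in handle)
    # Threshold of 60% digits is an arbitrary illustrative choice.
    return digit_count / len(handle) >= 0.6

print(looks_randomized("user8273640182"))  # True  (10 of 14 chars are digits)
print(looks_randomized("DailyNewsDesk"))   # False (impersonation-style handle)
```

Note that the second kind of account, one impersonating a news outlet, would sail past any handle-based check, which is part of why such accounts are more dangerous.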

What should we do to stop it?

There have been several efforts in both the private and public sectors to prevent authoritarian interference on social media and to combat its effects. While these efforts are important, they will only work if the public and elected officials believe in the severity and urgency of the issue.

For journalists and news organizations, I recommend utilizing resources like the IOA to give concrete examples of interference to demystify and inform the public of the realities of foreign interference on social media.

For Congress and the Executive Branch, I recommend working to lessen the partisan divide on the issue, which will trickle down to depolarize and depoliticize public opinion. Upholding the integrity of US elections should be a priority on both sides of the aisle.

For social media companies, I recommend that once a post or account is flagged for potential misinformation or foreign interference, users who viewed and/or interacted with the content should be notified and provided independent fact-checking material. This will help repair the effects of social media interference on an individual level.
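The notification step recommended above reduces to a simple lookup: given a log of who interacted with which post, find every user who touched a flagged post. The data structures and names below are a hypothetical sketch, not any platform’s actual API.

```python
# Hypothetical interaction log: (user, post they viewed or engaged with).
interactions = [
    ("alice", "post_1"),
    ("bob",   "post_2"),
    ("alice", "post_3"),
    ("carol", "post_1"),
]

def users_to_notify(log, flagged_posts):
    """Return the set of users who interacted with any flagged post."""
    return {user for user, post in log if post in flagged_posts}

# If post_1 and post_3 are flagged for interference, alice and carol
# should receive independent fact-checking material.
print(sorted(users_to_notify(interactions, {"post_1", "post_3"})))
```

In practice the hard parts are flagging accurately and at scale; the remediation step itself, as sketched here, is straightforward once a post is flagged.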


Bradshaw, S., (2020). Influence operations and disinformation on social media (Modern Conflict and Artificial Intelligence, pp. 41–47). Centre for International Governance Innovation.

Brandt, J., & Hanlon, B. (2021). Defending 2020: What worked, what didn’t, and what’s next.

Croteau, D., & Hoynes, W. (2019). Media/society: Technology, industries, content, and users (Sixth edition). SAGE.

Galante, L., & Ee, S. (2018). Defining Russian election interference: An analysis of select 2014 to 2018 cyber enabled incidents. Atlantic Council.

IOA — Information Operations Archive. (n.d.). Retrieved May 8, 2021, from