
Weaponized Disinformation: A global threat to democracies

Author: Lauren Salim

March 28, 2023

Around the world, we are seeing an increase in the coordinated use of information, often false or taken wildly out of context, to sow discord within and between nations. Both state and non-state actors have become increasingly adept at “weaponizing” information in the lead-up to election cycles.

This interference is significant because the inability to access trustworthy, relevant, and timely information during election cycles undermines democracy, discourages civic participation, and can cause people to lose faith in the institutions designed to serve them.

This is a widespread issue. Freedom House recently examined 40 elections and referendums held between June 2018 and May 2020 and found that 88% of these contests were marred by digital election interference.

Impact of the internet on elections

It’s fairly evident that many of us rely on the internet and social media for news. Social media, in a very real sense, is the new public square. It’s important to acknowledge, then, that the internet and social media platforms are where much of voters’ political discourse now takes place.

Digital platforms are the new battlefield for democracy. It is seemingly unavoidable that to gain and maintain legitimacy, those seeking power must understand and contend with the nuanced ways information flows on the internet.

Election interference is not a new phenomenon, but its effects are amplified by the reach and access afforded by the internet and social media platforms. Furthermore, the internet provides a relatively low-cost way to reach many people, making it even more accessible for bad actors to spread disinformation.

Another consideration is the huge amount of data that websites collect from internet users. Voters are often unaware of what kinds of data are collected about them, which also means they have little control over how social media companies have profiled them. Microtargeting technologies can use this data to provide political actors and other agents with fairly detailed information about voters, broken down by race, political affiliation, age, religion, and more, allowing them to customize advertisements to draw people in.

Bots and other inauthentic behaviour also have a significant impact on election disinformation. Bots are automated programs used to engage on social media. Essentially, they are algorithms pretending to be human, except they can engage with content and circulate it at a much wider scale. Bad actors can use bots fairly easily to control a narrative, attack other social media users, and generally create chaos online.

Due to these factors, the internet and social media platforms amplify all the channels through which actors can spread information, and unfortunately disinformation, online.

Motivations & Tactics for Election Interference

There are a range of motivations for interfering in elections. It may be a more authoritarian regime trying to maintain power, or an opposition group trying to gain it. Or perhaps it is a foreign government trying to control the narrative in their favour or to create chaos to destabilize the nation. It might also be companies that are seeking to gain an advantage.

The ‘who’ behind the electoral interference largely dictates which tactics will be used. Freedom House identified three main tactics of digital election interference: 1) Informational, 2) Technical, and 3) Legal.

Informational measures are ones in which online discussions are manipulated in favour of a government or particular party. This includes propaganda, fake news, paid commentators, and the use of bots to either amplify a certain message or attack a different one.

Far-right groups often seem to have more success exploiting social media because false, shocking, negative, and emotionally charged content is more viral. The more outlandish the content, the further it spreads, meaning even more people can see or share it.

Technical measures are tactics used to restrict access to news sources, communication tools, and in some cases the entire internet, sometimes involving cyberattacks. By controlling the flow of information, those in power can silence opposing views or redirect people to content that supports a message favourable to them.

Finally, legal measures allow authorities to punish opponents and quell political expression. Using legal measures, those in power can control online speech during election cycles with threats of criminal charges, often by criminalizing forms of speech so that charges like defamation of a public official can lead to jail time. Legal measures are most common in countries where leaders lean authoritarian and rely on abusing legal mechanisms to gain or maintain control.

Examples of digital election disinformation

In advance of Australia’s federal election in 2019, China’s Ministry of State Security used technical measures of election interference, conducting a cyberattack on the national parliament and the three biggest political parties.

Foreign states like China and Russia are also becoming increasingly good at using seemingly ridiculous conspiracy theories to create further societal divisions and compromise political processes. For example, in 2020 and early 2021, and leading up to the January 6th insurrection in the United States, one-fifth of all QAnon posts on Facebook originated overseas.

For anyone not familiar with QAnon, it is a conspiracy theory that falsely and outlandishly claims that Donald Trump has been battling a cabal of Satan-worshipping pedophiles formed by Democratic politicians and celebrities.

Essentially, foreign governments and actors were able to exploit the fear and uncertainty of the COVID-19 pandemic to push conspiracy theories, using informational measures they knew would lead to violent unrest, because creating that chaos was in their interest.


What can be done?

Strong privacy legislation is needed that requires companies to disclose how they use the data they collect on individuals, which third parties they share that data with, and for what purposes those third parties can use it. Such legislation also needs to obligate companies to notify individuals in a timely manner if their information is compromised.

In Canada, the current Personal Information Protection and Electronic Documents Act (PIPEDA) broadly allows companies to use data as long as it is collected for a specific purpose identified in the company’s privacy policy. This standard is far too broad; however, changes are proposed under Bill C-27, which includes the Consumer Privacy Protection Act.

In the US, while voter intimidation is illegal, those protections don’t necessarily extend to the online space. The failed “For the People Act of 2021” included reforms expanding platform liability by criminalizing voter suppression on platforms. If resurrected, a similar provision could ensure those protections extend online.

Democratic governments also need to be proactive in using truthful messaging to push back against authoritarian advances. One example is the US recently declassifying intelligence documenting Putin’s preparations ahead of the invasion of Ukraine, pre-empting the false pretexts he would otherwise have used to justify it.

