Three Ways to Hack the U.S. Election



Use of Cyber Deception to Influence Voters

Rather than directly manipulate votes, cyber attackers are now finding success in misleading the voters themselves. Once again, social engineering triumphs over technology. The key to this tactic is simple, and cyberwarfare researcher The Grugq said it best: “People will believe what their computers tell them is reality.” But how these attackers’ methods differ depends on their goals.

The FBI has identified the three top threat actors to U.S. interests with regard to election security: China, Russia, and now, Iran. Each threat actor has different goals and operates differently based on those goals, although their basic technical methods are similar.

China, for the most part, follows a more traditional model of influence. It tries to persuade key influencers (sometimes called shills), both individuals and companies, with economic levers to portray China in a positive manner. Influencers are also encouraged to oppose negative news about China by downplaying it, not mentioning it at all, or criticizing those who promote it. Sometimes these influence campaigns are about China in general, such as “China is a technological leader,” and other times, they are about current inflammatory political causes. Their campaigns tend to be low and slow, risk averse, subtle, and meticulous.

Iran, a recent player in this game, tends to similarly target influencers, and its attention focuses mostly on its own reputation and other regional players. There have been indications that Iran has tried to influence U.S. elections, both in the 2018 midterms and the 2020 presidential election.

Russia, however, is an outlier. Instead of targeting influential individuals or organizations that can then be used to sway elections, it aims its efforts at the general U.S. population. It also does not necessarily promote a particular cause, candidate, or agenda, but rather seeks to sow chaos and discontent among voters. In this process, it plays particular political or ethnic groups off each other. For example, the Senate Intelligence Committee’s Russia investigation of the 2016 election found that Russian influence efforts heavily targeted African-Americans, noting that, “By far, race and related issues were the preferred target of the information warfare campaign designed to divide the country in 2016.” There are newer tools, primarily Internet-based, that help citizens navigate news bias and disinformation, such as Snopes and AllSides, but these require additional effort and Internet media skills that many older voters lack.

If there is any central theme to their influence efforts, it is “America is terrible.” Russia is also the most well-funded, aggressive, and experienced of all the major threat actors targeting elections. According to the FBI, the four major goals of Russian election influence efforts are:

  • Divide and demoralize
  • Muddy the public discourse
  • Discredit and undermine
  • Distract

Divide and Demoralize

Russia seeks to stir up both sides of a debate and stoke fights. If a particular political issue is trending, they use targeted social media advertisements and false online identities to intensify the extremists on both sides of the issue. For example, if there is a police shooting of an African-American youth, they target both white supremacists and black activist groups with distorted messages about the issue. The goal is to create chaos as well as discourage any meaningful resolution of the problem.

Muddy the Public Discourse

Russian operatives are also known for injecting fake news into the public discourse to confuse the issue and obscure the truth. People become overwhelmed by the torrent of conflicting stories and fall back on cognitive shortcuts, which means trusting sources that affirm what they already know. In this way, filter bubbles are formed and political positions are cemented. Historically, Americans have been conditioned to expect professional journalism, where news media (even if biased in analysis) still based their reports on solid facts. In recent years, this has shifted as news channels began to “brand” themselves for particular audiences, cherry-picking news stories and adding more commentary to reporting. On the Internet, especially on social media, the responsibility for verifying facts shifts from the publisher to the consumer. Many Americans, especially those raised before the Internet was ubiquitous, are unaccustomed to and untrained in investigating Internet news. The result has been called the rise of the “low information voter.”

Discredit and Undermine

Similar to fake news, this tactic involves impersonating trusted organizations and individuals online to send false news that discredits them. For example, operatives impersonate election officials on social media or in email to provide false information about registration. This tactic is meant to suppress voter turnout and manipulate voter perceptions about the integrity of the election.

Distract

When hot-button issues and large debates are underway, Russian operatives fan the flames of argument by retweeting, sharing, and repackaging messages to amplify and prolong the disputes. Many of the messages that finally reach voters have already been relayed and amplified many times over, making it difficult to trace them back to the originator. All of these methods work synergistically, building upon each other for maximum effect.

Technical Methods

What specific technical methods do Russian influencers use? The easiest is to use legitimate tools, such as purchased advertising and open source intelligence data, just as online marketing firms do. In many cases, influencers were not directly buying these ads with a check written from the Kremlin bank, but instead used false front companies to misrepresent themselves.

According to the U.S. House of Representatives Permanent Select Committee on Intelligence, one of the Russian false front companies, the Internet Research Agency (IRA), purchased 3,393 advertisements on Facebook that were shown to over 11.4 million Americans. They also created 470 Facebook pages with 80,000 pieces of organic content, which were shown to more than 126 million Americans. Remember that only 120 million votes were cast in the entire 2016 Presidential election.

Another technique that is even more dishonest is to use bots as “sock puppets” or “Non-Player Characters (NPCs)” to create false followers and influencer armies online in what is called “astroturfing.” This technique involves using bots to impersonate humans on social media accounts to create and amplify social media messages. Sock puppets are a common tool for advanced attackers, both state-sponsored and large hacktivist groups. Both the Russian and Chinese governments have been accused of using these kinds of bots to pump false information out into social media. F5 Labs has profiled Russian efforts to compromise and exploit home IoT devices, which could easily be used for both surveillance and creating sock puppet bots. In some cases, these bots are using entirely fake identities with made-up names and, in other cases, stolen identities are used to create the bots.
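Spotting sock puppets at scale usually comes down to behavioral signals. The sketch below is a minimal, illustrative heuristic only; the fields, thresholds, and weights are our own assumptions, not any platform’s actual detection logic.

```python
# Illustrative sock-puppet scoring heuristic.
# All field names, thresholds, and weights are hypothetical assumptions,
# not a real social media platform's API or detection model.
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int                    # how old the account is
    posts_per_day: float             # average posting rate
    duplicate_ratio: float           # fraction of posts that are near-verbatim copies (0..1)
    follower_following_ratio: float  # followers divided by accounts followed

def bot_likelihood(acct: Account) -> float:
    """Return a rough 0..1 score; higher means more bot-like."""
    score = 0.0
    if acct.age_days < 30:                     # very new account
        score += 0.25
    if acct.posts_per_day > 50:                # inhuman posting cadence
        score += 0.35
    if acct.duplicate_ratio > 0.8:             # mostly copy-pasted amplification
        score += 0.25
    if acct.follower_following_ratio < 0.01:   # follows many, followed by few
        score += 0.15
    return min(score, 1.0)

# A five-day-old account posting 120 near-identical messages a day scores high.
suspect = Account(age_days=5, posts_per_day=120,
                  duplicate_ratio=0.9, follower_following_ratio=0.005)
print(bot_likelihood(suspect))  # → 1.0
```

Real detection systems combine many more signals (network graph structure, timing correlations across accounts), but the principle is the same: individual behaviors are plausible, while the combination is not.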

Another powerful technique is leaking and doxing, which we discussed on F5 Labs as a common hacktivism tool. Doxing (dox being short for documents, or docs) involves publicizing private or personal information on the Internet about someone to intimidate or embarrass them. On a broader scale, leaking is the publication of carefully curated and incriminating emails or confidential documents, which can be used effectively against organizations or public figures. The most famous expression of this was the hack of DNC email in 2016 and the selective leaking of embarrassing emails to WikiLeaks. Recently, Iran was accused of attempting to do the same thing for the 2020 presidential election but was thwarted by Microsoft security.

Blocking Deceptive Influencers

In response to this threat, the FBI has launched its Protected Voices campaign, working directly with campaign officials across the country to bolster their cyberdefenses. Since we live in a country that values free speech, the FBI is not in a position to censor or stifle political messages, especially since it is too hard to discern an American citizen from a bot at scale. However, social media platforms are in a better position to do this, and it is within their purview to detect and deny bots on their platforms. The FBI believes that exposing these bots is a key strategy: it’s one thing to hold an opinion, but it’s another to know that you’re repeating an opinion that originated in Russia.

Modeling Election Attacks

Another aspect to consider is what kinds of attacks we should expect, and therefore defend against in the future. To do this, we need to look at which attack methods are most cost-effective, and thus scalable and repeatable. Attacks need to be scalable and reliably repeatable because a U.S. presidential election involves manipulating hundreds of different venues across the nation.

In the simple table below, we outline the three primary attack methods: voter registration, voting machines, and deceptive voter influence. Attack effectiveness considers target availability, attack success likelihood, and attack impact (how many votes are affected). Attack cost takes into consideration resources expended, both money and time, in isolating the correct targets, weaponizing exploits to use on them, and managing the attacks. It is also a rough measure of personal risk to the attacker, that is, the likelihood of being exposed and arrested. By looking at both of these dimensions, an overall attack value can be derived.

Each attack method is rated from low to very high on effectiveness (target availability, attack success likelihood, and attack impact), cost, and overall value.

Hack voter registration
  • Target availability: Low – Need to find registration sites that are hackable in a useful manner
  • Attack success likelihood: Low – As far as we know, this has not happened yet
  • Attack impact: High – Adding new voters to vote for a preferred candidate is highly effective
  • Attack cost: Very high – Must create fake voters and accompanying fake identification, placing fake voters at risk of physical capture
  • Attack value: Low – Attacks are effective when they work but are hard to reliably reproduce at scale

Hack voting machines
  • Target availability: Low – Must target the voting machines in specific polling places in specific swing districts, which may not be known until the election is imminent
  • Attack success likelihood: Low – If the attacker can get physical access, attacks are likely to succeed, but physical access can be difficult
  • Attack impact: High – Untraceably altering votes to a preferred candidate is highly effective
  • Attack cost: Very high – Must create specialized malware and physically put it on voting machines in specific swing-district polling places while placing malware infectors at risk of physical capture
  • Attack value: Low – Attacks are effective when they work but are hard to reliably reproduce at scale

Deceptive influence: Leaking and doxing
  • Target availability: High – Many campaigns with systems to target, and most emails contain embarrassing details
  • Attack success likelihood: Medium – Gaining unauthorized access to a political campaign has proven not difficult
  • Attack impact: High – Has proven to effectively sway a significant percentage of voters on key issues
  • Attack cost: Medium – Requires hacking into protected email and document repositories
  • Attack value: High – Attacks are easy to attempt, reasonably effective, low cost, and low risk

Deceptive influence: Use of social media bots
  • Target availability: High – Many voters on social media who are easily identifiable and profiled
  • Attack success likelihood: High – Many resources available for social ad spending and bot impersonation
  • Attack impact: High – Has proven to effectively sway a significant percentage of voters on key issues
  • Attack cost: Low – Bots are cheap and easy to weaponize in this manner
  • Attack value: High – Bots and social media are very easy to use, low cost, and have proven effective
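The attack-value ratings above can be thought of as effectiveness relative to cost. Here is a minimal sketch of such a scoring model; the numeric mapping and formula are our own illustrative assumptions, not from the FBI or this analysis.

```python
# Toy scoring model for comparing attack methods.
# The level-to-number mapping and the effectiveness/cost formula are
# illustrative assumptions, not an official methodology.
LEVELS = {"low": 1, "medium": 2, "high": 3, "very high": 4}

def attack_value(availability: str, success: str, impact: str, cost: str) -> float:
    """Average the three effectiveness factors, then divide by cost.
    Higher values mean a more attractive attack for the adversary."""
    effectiveness = (LEVELS[availability] + LEVELS[success] + LEVELS[impact]) / 3
    return round(effectiveness / LEVELS[cost], 2)

# Two rows from the table:
print(attack_value("low", "low", "high", "very high"))  # hack voter registration → 0.42
print(attack_value("high", "high", "high", "low"))      # social media bots → 3.0
```

Even with this crude arithmetic, the ranking matches the table: bot-driven influence campaigns score roughly an order of magnitude better than hacking voter registration or voting machines.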

From this, we can see that deceptive cyber influence methods are more attractive, especially if the goal is to cause election chaos and not necessarily to elect a particular candidate.

Discussion

When the subject of “election hacking” comes up, the first response many people have is to spare no expense, as this is vital to protecting democracy. In reality, however, the U.S. government appears to expend no more on cyber defense protecting voting than most organizations do to protect our hospitals, emergency alert systems, first responders, financial system, power grid, or privacy. In other words, we put some effort into it, but many attacks still slip through. And, like everything else in cybersecurity, we seem to focus more on technical attacks and defenses while ignoring the more devastating social engineering attacks. So how much do we care? The U.S. government has put forth a noticeable effort to combat election hacking, but it is about on par with what it does to counter most other cyber threats: significant, but not enormous. In our personal lives, most Americans have become desensitized to the breach-of-the-week and to ransomware shutting down hundreds of schools and municipalities. Perhaps this isn’t seen as that big of a threat?

Given that the 2020 election is already underway, voters need to be aware of these kinds of deceptive tactics used to steer their votes toward foreign agendas. Security awareness at the consumer level has never been more important, and it begins with spotting media bias, which often mixes drama and opinion with real facts. Even though most of the major social media platforms are working to eliminate bots and fake news, identifying bots on social media is still going to be a vital skill for news consumers. One key giveaway for fake news stories is social media pushes of screenshots of news stories, often without any context, instead of links to the actual media site. Politicians are working to secure their campaigns, although the efforts are mixed. Elizabeth Warren, Cory Booker, and Bernie Sanders are receiving high marks for their site security while others are lagging.

In the end, if a nation-state attacker’s goal is to undermine confidence in the election results, it may not even need to succeed in hacking voting machines. If just enough credible news about a potential election hack takes root in the citizenry’s mind, cyber-influence efforts could amplify it to cast doubt on the election results and cause chaos.

In terms of cyberwarfare, we often think of hacking power grids, but hacking an election to create a political climate favorable to a nation-state seems to be a much more powerful technique. It would be a nation-state’s ultimate cyberwar victory to bend an opponent nation to their will without firing a single shot—and making it seem like they did it to themselves. We need to take care to ensure that doesn’t happen.


