Your list, your choice, but I liken it to the fact that we concern ourselves with phishing, although it is not a technical hack.
You make a good point in relating the human factor and the use of deception to compare phishing to disinformation campaigns. However, the difference I draw is that a phishing attack uses the deception of a human to gain access to IT systems. The other social attacks you addressed use deception and psychology to influence humans to take a specific action, that is, to vote a particular way, but that does not grant access to the IT system.
And, at the moment, the US electoral system is pretty much unprotected.
@rslade that might be employing a bit of hyperbole. The news story is about the Federal Election Commission not being able to meet due to a lack of quorum. The FEC has very little to do with elections themselves - more with campaign finance. In the U.S., elections - even those for national office - are run according to each state's own rules. In most cases, it is up to the city or town in question to conduct the elections however it wants. They determine the official count, which then gets reported and aggregated at the state level (or possibly first to the county level and then to the state). Each city or town uses its own mechanism, which can change in any given year according to the whim and budget of that municipality. Even if someone were able to compromise the aggregation of votes in an entire state, the contributing cities and towns would have the data with which to cry foul.
In all seriousness, I think these continued claims that election mechanisms are subject to compromise are an attack in and of themselves. The first step to truly hacking a U.S. election is to convince the people and the politicians to build some monolithic election system. Now you have a single point of attack. The inefficiency of the system is also its strongest defense.
Personally, I do not view InfoSec as inherently technology-bound, nor do I see much difference between "Vote for Alice" and "Give Bob money". The most common phishing example is "Hey CFO, write Bad Bob a check. Thanks, CIO." Although this might come via email, it could just as easily come over a phone call, a formal letter, or even a Post-it note. Some, but not all, of these involve technology, yet we defend against them all.
To protect an organization, one needs to consider all avenues of attack, not just those which involve technology, nor only those we are responsible for defending. When a risk arises that is outside our expertise, we need just enough knowledge to recognize its impact and to know which experts (PR, legal, management, law enforcement, medical, etc.) to engage and with what urgency. Our involvement often comes from the fact that technology amplifies the risk, not necessarily because it is the cause or target.
So if it were my election security list, it would include "disinformation campaign" as a risk, but the mitigation would likely be deferred to our partners outside of IT. But, as I said, your list -- your decision. I'm just here because I find the topic and the discussion interesting.
If a "tech tie-in" is a prerequisite, keep in mind that many disinformation campaigns succeed precisely because technology has significantly elevated the volume (both amplitude and quantity) of non-vetted sources, particularly social media, and we IT people can definitely play a role in detecting the "fake news".
The inefficiency of the system is also its strongest defense.
Also, as has been pointed out, "A huge breadth and diversity of counties means a huge breadth and diversity of security capabilities."
To protect an organization, one needs to consider all avenues of attack, not just those which involve technology, nor only those we are responsible for defending.
Lots of good points in that post. I doubt there is a professional among us who hasn't encountered people who really shouldn't own a smartphone, laptop, or possibly a Twitter account. A huge part of our job these days is security awareness, but often it feels like teaching driver's ed to people who already have their licenses and have bought a car. There's a reason why we wait until kids reach a certain maturity, have some preliminary experience, and pass a test before we make them drivers. Even then it is often probationary. Yet, any moron 18 or over can vote.
If certain democracies have failed in their election systems, it's not in the technology; it is in the broad education of their electorate. Classic skills of logic, debate, and reason have atrophied in U.S. society. This has happened despite rising achievement as measured by college degrees attained or high school test scores. What's interesting are the emerging data points suggesting Millennials are more likely to be caught in confidence schemes than their grandparents.

I think a huge factor in this is that we have raised kids in an environment where, when faced with a "problem," they are to select the "correct" answer from a list of choices - i.e., standardized testing. That is not how anything in the real world works. Sure, it generates great metrics, but it teaches kids to discount other possibilities. This pretty much defines politics today: if you don't agree with me, then you are a fascist, radical, elitist, etc. That lack of thinking, empathy, and logic is what makes us so susceptible to the potential social engineering of a campaign. Quite honestly, far more of that kind of garbage originates within the U.S. than outside of it. I don't see the Russians, for example, taking over the Commission on Presidential Debates, which has continually lowered the bar on debate quality while raising the bar for entry to the debate stage.
Windows CE 5.0? You've got to be kidding me ...
(A really great piece, amalgamating all the potential problems ...)