An interesting attack against U.S. critical infrastructure, with immediate impact:
"A cyber attack forced the shutdown of a key U.S. pipeline that carries almost half of the East Coast's fuel supply. Its operator, Colonial Pipeline, says 5,500 miles of pipeline has been closed while containing the breach. An anonymous source tells The Wall Street Journal ransomware appears to be involved. Analysts don’t expect fuel markets to be affected unless the shutdown lasts beyond a few days. The breach is happening after two major attacks on U.S. computer networks, and it shows "core elements of our national infrastructure continue to be vulnerable," per a former National Security Agency official."
Ransomware apparently was involved again. So was this also a backup failure, or human error?
Backups don't prevent ransomware; they make recovery possible. Based on the news reports so far, I (and many of us) presume they will be restoring from backups onto the existing equipment. Although restoring backups does take time, completion within their MTD (Maximum Tolerable Downtime --- "unless the shutdown lasts beyond a few days") does seem reasonable -- presuming enough resources (people, tape drives, network capacity) are available for parallelization.
If this extends past the MTD, I view this as a planning failure because one should invest in inventory to ensure the MTD exceeds the RTO (Recovery Time Objective).
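To make that RTO/MTD arithmetic concrete, here is a back-of-the-envelope sketch. All figures (data volume, throughput per stream, MTD) are hypothetical, not Colonial's actual numbers; the point is only that parallel restore streams are what keep the RTO under the MTD:

```python
# Back-of-the-envelope restore-time estimate. All figures hypothetical.

def restore_hours(data_tb: float, streams: int, tb_per_hour_per_stream: float) -> float:
    """Wall-clock hours to restore, assuming perfectly parallel streams."""
    return data_tb / (streams * tb_per_hour_per_stream)

mtd_hours = 72     # "a few days" of tolerable downtime
data_tb = 200      # hypothetical data set to restore
per_stream = 0.5   # TB/hour per tape drive or network stream

# One stream blows past the MTD; sixteen comfortably beat it.
assert restore_hours(data_tb, 1, per_stream) > mtd_hours    # 400 hours
assert restore_hours(data_tb, 16, per_stream) < mtd_hours   # 25 hours
```

The model ignores real-world friction (catalog lookups, verification passes, staff shifts), so it understates the RTO; it is a planning floor, not an estimate.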
The news that comes out next week will tell us if they had a DR (Disaster Recovery) plan that properly prioritized recovery efforts, if their backups were "good" and if they had invested in recovery capacity. If not, their East Coast customers will pay the price --- at the pump.
One thing that came to mind for me: how long had the ransomware or malware been sitting on their systems?
Do their backups contain the ransomware/malware, and if so, will it be cleaned before use?
Is there going to be any data loss from the restores?
@GerryS From a practical sense, in my part of the world, the average time to detect a compromise is 293 days. Whether they have actually tested their backups in that time is another factor. The associated costs tend to balloon out from the point that they detect and then respond to the compromise.
@GerryS Of course, we can always learn from the victims' experiences too:
@GerryS One of the questions you asked related to backups. My question is not whether they tested them, but whether they had them at all.
Unfortunately, a number of these systems cannot be "interrupted", so they may only have six months to one year of incremental backups, which would be extremely difficult to restore from.
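A toy sketch of why a long incremental chain hurts: restoring means applying the last full backup plus every incremental after it, in order, and one corrupt link breaks everything downstream. The numbers and set names are hypothetical:

```python
# Toy model: a restore needs the last full backup plus every incremental
# after it, applied in order. All figures hypothetical.

def restore_plan(days_since_full: int) -> list:
    """Return the ordered list of backup sets a restore must apply."""
    return ["full"] + ["incr-%d" % d for d in range(1, days_since_full + 1)]

# Six months of daily incrementals with no interleaved full backup:
plan = restore_plan(180)
print(len(plan))  # 181 backup sets, each one a chance for the restore to fail

# If a single incremental is corrupt, everything after it is unusable.
corrupt = "incr-90"
usable = plan[:plan.index(corrupt)]
print(len(usable))  # only the full plus 89 incrementals survive
```

This is why periodic full (or synthetic full) backups matter even on systems that cannot be interrupted: they cap the length of the chain a restore depends on.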
ICS security is getting better, but it is unfortunately a long way from being in a good place.
Most of these systems are "far behind" in terms of patching, a number have no malware protection, and malware is being developed to target these systems. This new malware is being modified (EKANS as an example) not only to disrupt the systems but to specifically stop processes (so for all intents and purposes the systems seem to be functional).
When dealing with this type of system, I recommend the following:
1. Segment the network
2. Test backups for last known good (of course this requires either downtime or a spare system)
3. Stronger authentication to these systems (and of course least privilege)
4. Where and when possible, deploy additional technologies that can identify anomalies (you may get pushback on this one, as folks will counter with system performance issues)
5. Disable unused ports on the systems
@dcontesti To be honest, organisations and people should be taking heed of NIST SP 800-207 (Zero Trust Architecture), especially in ICS and critical infrastructure, and getting their ducks in a row, rather than not knowing where their ducks actually are and who has access to them at any time.
Plus, to learn more specifics about the attack, register for this online session:
And earn some CPEs at the same time.
All - Harsh but true comments about ransomware:
"I saw an interesting post on here from US-CERT re ransomware. The gist of the post was: if you have a problem, let us know, we can help. Again I reiterate: companies need to take responsibility for their own GRC and cybersecurity defense. Suggesting that they call Ghostbusters if they see a ghost is not a good use of resources; sure, it is nice to know that if everything was done and help is needed, it is there, but suggesting DON'T PAY THE RANSOM, we will sort it out, is just a false economy! You do understand the hackers are operating more like Tony Soprano than the great unwashed in their mothers' basements. You don't pay, they break a finger, metaphorically speaking. CERT has a place and much value, but allowing governance to abdicate its responsibility is not one of them! Same for law enforcement: if every company thinks "hey, I don't need to worry about this," then that is just a terrible allocation of resources by CERT and law enforcement. Regulate, fine, and make liable bad governance, or you will have a dog's breakfast on your hands."
Comments taken from a colleague on the situation regarding ransomware. Courtesy of David Brennan.
@dcontesti More advice is coming; look at the relationship to Zero Trust security as well:
Great insights into the latest attack against critical infrastructure.
OK, now I have been able to confirm it.
The pumping control system wasn't what was affected by the ransomware attack. It was the billing system. Colonial shut down gas deliveries to the eastern half of the US because they were afraid they wouldn't get paid enough.
Oh, and they have paid the ransom: $5 million.
(And the media reports are still confusing ransomware and breachstortion ...)