Hi All
When it comes to incident response and a privacy investigation, everything is eventually revealed:
Background: Optus, an Australian telco owned by Singtel, recently had another incident: Singtel routers did not recover as quickly as they should have, causing another major outage this year.
https://lsj.com.au/articles/no-secrets-for-optus-findings-on-data-breach-can-be-revealed/
Latest:
Regards
Caute_Cautim
Sadly, this happens more often than not. It shows that GOOD patch management processes are key.
We once had an incident where computers began rebooting and presenting the Blue Screen of Death. The computers were distributed across a 900-acre campus with over 30 kilometers of internal roadways and hundreds of buildings, so the reports were hit and miss and initially seemed random, happening only once an hour or so. Finally it hit a computer in IT, and we were able to investigate while the machine was still being affected (in other cases, the problem would clear once the system was rebooted, so troubleshooting was almost impossible).
Upon forensic investigation, we determined that we had been hit by Stuxnet. Remember, this virus was intelligent and tried to connect to specific Siemens PLCs (programmable logic controllers). Fortunately for us, we did not have any on the network, although it was a large network with many industrial control systems.
So what happened? We had a huge in-house print shop that used a very specific VENDOR (left blank to protect the guilty) who came in unannounced with a new version of their code. The staff in that area allowed them to update the computers in the print shop. Surprise, surprise: their distribution kit was infected with the virus.
What happened after? Staff in the print shop were educated about proper change management. The VENDOR was put on notice and was actually presented with a BILL.
So ensuring that NO ONE touches anything on your network without the proper approvals is key.
To me this is a case of shoddy Patch Management.
MHOO
d
@dcontesti wrote:
... Their distribution kit was infected with the Virus... To me this is a case of shoddy Patch Management.
In a similar case, I was once involved in the cleanup after an end-user opened a virus-infected email attachment. In that case, prompt deployment of a patch that had been released a week earlier would have prevented widespread impact.
In the end, I am not sure one can find a happy middle-ground between speed and caution in patching.
IMHO, the bigger lesson is defense-in-depth. We need to accept that we cannot stop all bad things, so we also need to focus on preventing badness from moving laterally. For example, could the print shop be limited to connecting only to servers located in the data center, perhaps with minor whitelisting of east-west traffic (e.g., Domain Controller to Domain Controller)?
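To make the idea concrete, here is a minimal sketch of what such segmentation might look like as iptables rules on a firewall between segments. The subnet addresses (10.20.0.0/24 for the print shop, 10.10.0.0/24 for the data center) are purely illustrative assumptions, not taken from the original incident:

```shell
# Hypothetical example: print-shop subnet 10.20.0.0/24 (assumed),
# data-center subnet 10.10.0.0/24 (assumed).

# Allow print-shop hosts to reach data-center servers only
iptables -A FORWARD -s 10.20.0.0/24 -d 10.10.0.0/24 -j ACCEPT

# Allow return traffic for sessions the data center initiated or replied to
iptables -A FORWARD -d 10.20.0.0/24 -m state --state ESTABLISHED,RELATED -j ACCEPT

# Drop everything else leaving the print-shop segment (no lateral movement
# to other campus subnets, even if a machine there is compromised)
iptables -A FORWARD -s 10.20.0.0/24 -j DROP
```

With rules like these, an infected print-shop machine could still talk to its approved servers but could not scan or spread to the rest of the campus network, which is the point of limiting east-west traffic.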