Has anyone found a way to mitigate this risk, or how did you word a deviation? We are beating our heads against the wall on how to address this in our FedRAMP audit.
On one hand, if they have remote administrative access (or, God forbid, local access) to my servers... it seems like I'm already in trouble before they change the boot loader.
Any advice will be appreciated!
Thanks,
Chad
@cweatherford I am sure others will come forward; as usual, I did some Googling around:
BootHole vulnerability
BootHole vulnerability in Secure Boot affecting Linux and Windows
Microsoft has recently released a patch for the BootHole vulnerability:
https://support.microsoft.com/en-us/...7-d0c32ead81e2
Based on https://msrc.microsoft.com/update-gu.../CVE-2020-0689, for Windows Server 2016 they installed the following updates (a quick install check is sketched after the list):
1. Servicing Stack Update KB4576750
2. Standalone Secure Boot Update Listed in this CVE KB4535680
3. Jan 2021 Security Update KB4598243
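A quick sanity check that those three KBs are actually present, before moving on to the UEFI check. This is just a sketch using Get-HotFix; it does not report every update type, so treat a "not found" as a prompt to check the update history rather than proof the update is absent.

# Sketch: check whether the three KBs mentioned above show up on a
# Windows Server 2016 box. Get-HotFix does not list every kind of
# update, so a miss here is not conclusive on its own.
$kbs = 'KB4576750', 'KB4535680', 'KB4598243'
$installed = Get-HotFix | Select-Object -ExpandProperty HotFixID

foreach ($kb in $kbs) {
    if ($installed -contains $kb) {
        Write-Output "$kb : installed"
    }
    else {
        Write-Output "$kb : not found via Get-HotFix - check the update history"
    }
}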
However, there appears to be a method of verifying whether or not it has been fixed:
"Based on https://msrc.microsoft.com/update-gu...lity/ADV200011
Just run this command to verify?
[System.Text.Encoding]::ASCII.GetString((Get-SecureBootUEFI db).bytes) -match 'Microsoft Corporation UEFI CA 2011'
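For what it is worth, that check only means much if Secure Boot is actually switched on, and my reading of ADV200011 is that a True result mainly tells you the box trusts the Microsoft third-party UEFI CA (i.e. it is in scope), so it may be more of a scoping check than proof the fix landed. A slightly fuller sketch using the same cmdlets:

# Sketch only: confirm Secure Boot state first, then look for the
# Microsoft third-party UEFI CA in the signature database (db).
# Run from an elevated PowerShell session.
# Note: Confirm-SecureBootUEFI errors out on non-UEFI / legacy BIOS systems.
if (Confirm-SecureBootUEFI) {
    $db = [System.Text.Encoding]::ASCII.GetString((Get-SecureBootUEFI db).Bytes)
    if ($db -match 'Microsoft Corporation UEFI CA 2011') {
        Write-Output 'Secure Boot is on and the db trusts the Microsoft Corporation UEFI CA 2011'
    }
    else {
        Write-Output 'Secure Boot is on; the third-party UEFI CA is not in the db'
    }
}
else {
    Write-Output 'Secure Boot is supported here but not enabled'
}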
Often the advice is to wait for the vendor to release a fix for the problem; you may have to do some additional homework to verify it.
Apparently some Red Hat and CentOS patches were preventing systems from rebooting after they were applied, so there is an obvious need for thorough testing before applying them to production systems.
Regards
Caute_Caute
Thanks Caute,
I have seen this information. I was wondering what others are doing to mitigate or remediate the issue.
We had several show up that were VMware guests; no issues on Xen. What we tested was disabling Windows Secure Boot. Not ideal, but it did remove the vulnerability until there is a solid fix.
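If anyone wants to keep track of which VMware guests still have Secure Boot switched on, something along these lines in PowerCLI should do it. Sketch only: it assumes the VMware.PowerCLI module and an existing Connect-VIServer session, and the property names come from the vSphere API, so verify them against your version.

# Sketch: list firmware type and EFI Secure Boot state for each VM.
Import-Module VMware.PowerCLI

Get-VM | ForEach-Object {
    $view = $_ | Get-View
    [pscustomobject]@{
        Name              = $_.Name
        Firmware          = $view.Config.Firmware
        SecureBootEnabled = $view.Config.BootOptions.EfiSecureBootEnabled
    }
}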
This is really one of those situations where the "if they got that far in, does this really do more for them?" question comes up from management. Which is a valid point.
Also, we did try the "UNTESTED" patch (in a test environment); it did not remove the vulnerability.
Chad
I did some more digging around. The best advice at the moment appears to be to treat the affected system as though it has been compromised, i.e. assume the attacker has obtained admin or privileged access to it and that it will be used as a platform to attack other systems, internally or externally, or to exfiltrate data for the usual nefarious purposes.
There are several approaches:
1. Isolate the affected system.
2. Cleanse and rebuild the system from scratch.
3. Apply the current patch, then test it, and test it again before putting it back into production.
I agree this is a costly exercise, but the implications of not cleansing it and burning it down are too risky to contemplate, especially as the time to detect and determine whether a system has been compromised averages around 273 days, which is costly enough in itself given the investigation, reporting and analysis (root cause analysis and so on) involved.
Your management are right to question the impact and associated risks: in the wrong hands the system is effectively a dangerous weapon, and it can no longer be trusted until it has been verified as thoroughly cleansed, tested, rebuilt and reloaded from a known-good image. That applies regardless of whether it is Windows, Linux or even Unix.
Regards
Caute_cautim
I do not know if you are still dealing with this, but I found a few things.
I have not had success applying it to an affected system, but that could be down to my limited PowerShell knowledge of how to run these commands. The split-dbx step does seem to work, but the SecureBootUEFI step is giving me trouble.
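For reference, the call I am attempting looks roughly like this. The file names are just placeholders for the two halves of the split DbxUpdate.bin, and the timestamp and AppendWrite choice come from the splitting guidance I followed, so please correct me if that is where I am going wrong.

# Rough shape of the apply step, run from an elevated PowerShell session.
# Placeholder paths: substitute the content/signature files your split
# script produced; confirm the -Time value against the guidance you use.
$content   = '.\content.bin'
$signature = '.\signature.p7'
$time      = '2010-03-06T19:17:21Z'

Set-SecureBootUEFI -Name dbx -Time $time `
    -ContentFilePath $content -SignedFilePath $signature -AppendWrite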
@johnharrisjr Well done, given that Microsoft appears to have a load of zero-days and apparently does not have the resources to fix many of the issues arising. It will be interesting to watch this space and see how many more issues like this turn up.
Thanks for your due diligence.
Regards
Caute_Cautim