Given the thread discussing mitigations of the recently disclosed processor bugs, I thought it might be helpful to put together a brief reading list explaining the what and how of the bugs, and some related info.
For a not-too-technical read, I'll first toot my own horn regarding some of the lessons learned from an engineering perspective (and the importance of QA generally):
The Raspberry Pi folks, in explaining why the Pi computers are not vulnerable, put together a nice, easily digestible explanation of how superscalar processors, out-of-order execution, and speculative execution work:
For the technically inclined, a deep explanation of the bugs and some PoC exploit code is available from Google's Project Zero, whose Jann Horn independently identified the same bugs the Austrian researchers did:
Dark Reading has a fairly well put together explanation covering some of the history as well as the exposure and risks; it is much better than the non-technical press, which has been in full-on Chicken Little mode:
The Linux Kernel Mailing List and FreeBSD Security mailing lists are good resources but likely of limited utility unless you’re really into the nitty gritty of the fixes.
It is worth noting that while the FreeBSD Project was apparently informed of the issue in December, it is unclear what the fix there might be, and they are behind Linux and Windows in delivering one. Advance notice was apparently not provided to OpenBSD (known for being extremely hard-core about security) or to the FreeBSD downstream project HardenedBSD. A post on the current state of mitigations in the BSD world can be found here for those interested:
Good list. I liked this SANS webcast quite a lot too: a good, concise rundown of the processor design flaw each bug abuses, the most probable exploit scenarios, and some thoughts on mitigations:
Thanks for posting, WDF.
It's going to be very interesting, once all the fixes, patches, and firmware tweaks are in place, to see just how much compute performance the industry is going to lose until all the silicon is refreshed. Cloud operations that are running close to optimally will probably need to put a few more servers on the fire, and applications and services running at choke points, such as (dare I say security) appliances (you know, all those fancy edge firewalls, etc.), may need more tin for throughput.
Intel (and others) have an interesting problem, I think, on three fronts: a number of lawsuits (https://www.engadget.com/2018/01/05/intel-faces-multiple-lawsuits-spectre-meltdown-vulnerabilities/), shifting the affected inventory they have not yet sold (if I have enough capacity and commercial savvy, I'm going to put the thumbscrews on), and putting in better QA and re-tooling to make a better product.
Now you can set that off against a massive opportunity for silicon manufacturers: all the components that sustain humanity and its various hobbies will need replacement once the required higher levels of scrutiny are put into place. Even so, it will be a bumpy ride...
Your last point on NetApp reminds me of a conversation I once had with a tin-foil hatter about a particular vulnerability. As these were not high-assurance systems, I walked away in the end, leaving it with something along the lines of:
"So, if the attacker roots the box, and has arbitrary code execution then they could execute code that would retrieve the encryption key from memory if you hadn't scrubbed it by then and allow them to steal the users non-adminstative credentials? Maybe, I'll just install a key logger and wait..."
Not the same thing, and obviously closing off every vulnerability is a good thing, but the remote vs. local exploitability debate that happens in triage is always interesting, and when something affects everything, what do you do first?
Very good summary. I particularly enjoyed reading why the Raspberry Pi is not vulnerable: it explains in simple terms the CPU performance features at the heart of Meltdown and Spectre.