Some thoughts on vulnerability management, which some may find interesting, over on Cybrary:
https://www.cybrary.it/0p3n/thoughts-vulnerability-management/
Kind Regards
Gary
A good response. These days we definitely need to automate; we cannot rely on traditional methods that depend on resources, which are definitely dwindling. Other techniques include BigFix Lifecycle, as you stated, working in a similar way to Shavlik and SCCM. The same principles need to be applied to compliance and baseline policies, which ensure that patches are applied in good time, after sufficient testing, and come from a reliable, verified source. These would be applied with rollback available if there were issues and additional manual intervention were required. It is, in fact, a form of "hygiene" for systems, regardless of the underlying operating system and applications used.

I know of at least one organisation with an "n-2" approach, which is definitely not the best approach these days. They only escaped the original WannaCry attacks because good security intelligence was available and they were made aware of the forthcoming attack. Fortunately they responded accordingly to the notifications and carried out the necessary patching. They were extremely lucky; many others were not.
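To make the baseline-policy point concrete, here is a minimal sketch of an age-based compliance check; the inventory format and the 30-day window are assumptions for illustration, not tied to BigFix, Shavlik or SCCM:

```python
# Minimal sketch (not tied to any specific tool): flag hosts whose last
# verified patch run falls outside a hypothetical 30-day baseline window.
from datetime import date, timedelta

MAX_PATCH_AGE_DAYS = 30  # illustrative baseline policy, not a vendor default

# Hypothetical inventory export: hostname -> date of last verified patch run
last_patched = {
    "web01": date(2017, 10, 1),
    "db01": date(2017, 11, 20),
}

def out_of_baseline(inventory, today=None):
    """Return hosts that have gone longer than the policy window without patching."""
    today = today or date.today()
    cutoff = today - timedelta(days=MAX_PATCH_AGE_DAYS)
    return [host for host, patched in inventory.items() if patched < cutoff]

if __name__ == "__main__":
    for host in out_of_baseline(last_patched):
        print("Out of compliance:", host)
```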
I think you are spot on.
Vulnerabilities are more often the result of misconfigurations and of leaving systems in their default configurations. So, user awareness training for system admins and information users should be prioritised to limit the number of vulnerabilities. In addition, having hardened system images that can be cloned for all new installations and configurations can indeed help reduce the number of vulnerabilities.
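As a rough illustration of the hardened-image point, a simple drift check against the baseline might look like this; the settings and values are invented for the example:

```python
# Minimal sketch of baseline drift checking (settings and values are made up):
# compare a host's current configuration against the hardened image it was
# cloned from and report anything that has drifted back towards defaults.
HARDENED_BASELINE = {
    "smbv1_enabled": False,
    "password_min_length": 14,
    "guest_account_enabled": False,
}

def config_drift(current):
    """Return settings whose current value differs from the hardened baseline."""
    return {
        key: (current.get(key), expected)
        for key, expected in HARDENED_BASELINE.items()
        if current.get(key) != expected
    }

# Example: a host where SMBv1 has been switched back on
print(config_drift({"smbv1_enabled": True, "password_min_length": 14,
                    "guest_account_enabled": False}))
```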
We need to go a step further and make vulnerability data more immediately available to systems engineers, proactively engaging them to make remediation part of their deployment cycles. Vulnerability management does not equal patch management: not all vulnerabilities are addressed by a patch, and there are legitimate business reasons for alternate configurations. Technology change will continue to accelerate, and we will continue to fail if we separate those who make security decisions from those who make design decisions. DevOps is all about telemetry, so we need to make security requirements and their supporting data part of development and delivery pipelines.
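One way to picture that is a small pipeline gate. This is only a sketch: the JSON report layout (a list of id/cvss/remediated entries) and the 7.0 threshold are assumptions for illustration, not any particular scanner's real output:

```python
# Illustrative CI/CD gate: fail the pipeline stage when a scan report contains
# unremediated findings at or above a CVSS threshold. The JSON layout is
# assumed for this sketch and is not any particular scanner's real format.
import json
import sys

CVSS_FAIL_THRESHOLD = 7.0  # assumed policy value, adjust to your own standard

def gate(report_path):
    """Return a non-zero exit code if any blocking finding remains open."""
    with open(report_path) as fh:
        findings = json.load(fh)  # expected: list of {"id", "cvss", "remediated"}
    blockers = [
        item["id"] for item in findings
        if item.get("cvss", 0) >= CVSS_FAIL_THRESHOLD and not item.get("remediated")
    ]
    if blockers:
        print("Blocking deployment, open high-severity findings:", ", ".join(blockers))
        return 1
    print("No blocking findings, deployment may proceed.")
    return 0

if __name__ == "__main__":
    sys.exit(gate(sys.argv[1]))
```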
I agree. However, the security community and many vendors are actively collaborating on the CVE approach: https://cve.mitre.org/ This is a standard way of ensuring vulnerabilities are reported consistently and hooked into the vulnerability management solutions available on the market.
So to take an example: https://exchange.xforce.ibmcloud.com/vulnerabilities/132756
You will note that the site immediately shows the CVE identifier with a CVSS rating, along with sufficient information for most users to understand the impact, the severity and the potential remedy.
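For anyone triaging such advisories, the published CVSS v3 qualitative severity bands translate directly into a small helper; the CVE list at the bottom is just an illustrative example, with the scores shown for demonstration only:

```python
# Helper reflecting the CVSS v3 qualitative severity bands
# (None 0.0, Low 0.1-3.9, Medium 4.0-6.9, High 7.0-8.9, Critical 9.0-10.0).
def cvss_severity(score):
    if score == 0.0:
        return "None"
    if score < 4.0:
        return "Low"
    if score < 7.0:
        return "Medium"
    if score < 9.0:
        return "High"
    return "Critical"

# Hypothetical triage list: CVE identifier -> CVSS base score (values illustrative)
for cve_id, score in {"CVE-2017-0144": 8.1, "CVE-2017-5638": 10.0}.items():
    print(cve_id, score, cvss_severity(score))
```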
Take the recent "Bad Rabbit" situation: https://exchange.xforce.ibmcloud.com/collection/XFTAS-SI-2017-00001-Bad-Rabbit-51701e9c25aaaf7e02b19...
This is an advisory, so it can provide early notification of pending issues such as these attacks, something vulnerability scanning and management alone cannot provide. It is an initiative by IBM to raise the bar and ensure that people and systems actually collaborate actively, sharing information both publicly and within private groups as required. At least the information is available in a digestible format for those interested, and also in a summary form that senior staff can easily digest as well.
So by combining vulnerability scanning with collaborative sharing of information, one can be much better prepared, receive timely notifications, and follow through to ensure systems are protected, or at least plan for the worst-case scenario.
First, kudos to all who replied on this topic. Your thought processes and ideas on this topic are spot-on. We have in place various methods to make this about risk. The higher the CVSS score of a CVE, the greater the risk of the VULNERABILITY.
I have stressed repeatedly to my students in my classes that the top 1-2 priority should be INVENTORY: Hardware and Software. How do you protect what you don't know about? And what you DO know about, how are you protecting it?
There is a critical consideration that I feel is missing in this argument/discussion: EXPLOITABILITY! If I have an embedded system that is totally off-network, and it has a high vulnerability, do I stop it from flying/sailing/building/cleaning etc., just because it has a high vulnerability? If the EXPLOITABILITY of the device is LOW, I think the severity rating should be downgraded for that device. There are products out there that do that. They work in conjunction with industry-rated vuln scanners (e.g. Nessus) and re-prioritize the findings so you focus on the systems that are actually an issue.
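To illustrate the idea, here is a toy prioritization heuristic; the weights and factors are invented for the example and are not how any particular product actually scores things:

```python
# Illustrative prioritization heuristic (weights are made up for the sketch):
# discount the raw CVSS base score for assets with low exploitability, e.g.
# an embedded system that is off-network, so remediation effort goes to the
# systems that are actually reachable.
EXPOSURE_WEIGHT = {"internet-facing": 1.0, "internal": 0.7, "off-network": 0.2}

def adjusted_risk(cvss_base, exposure, exploit_available):
    """Scale CVSS by exposure and by whether a working exploit is known to exist."""
    weight = EXPOSURE_WEIGHT.get(exposure, 1.0)
    exploit_factor = 1.0 if exploit_available else 0.6
    return round(cvss_base * weight * exploit_factor, 1)

# An off-network embedded controller with a 9.8 finding drops well below
# an internet-facing host with a 7.5 finding and a public exploit.
print(adjusted_risk(9.8, "off-network", exploit_available=False))    # 1.2
print(adjusted_risk(7.5, "internet-facing", exploit_available=True))  # 7.5
```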
Everyone chases the CVE rating or CVSS scores. For the most part, you can safely do that. But as a hacker, do I necessarily want to focus my attention on the High or Critical vulnerabilities, knowing that MOST companies will patch those within 24 hours, or a week?
This is also a mindset. I have seen this at several Fortune 500 companies, where the cycle of 30-day patching becomes embedded and thought of as a requirement ("oh, we need a patch, well, we do that every 30 days"). That mindset is outdated and wrong.
Excellent: keep up the good work with your students. Which leads into the subject of "hygiene": as you state, if you don't know what you are protecting, and whether it is regularly updated and maintained, then things can leap out of the dark and bite very hard indeed, when you least expect it. This relates closely to Service Management and ITIL practices. Basic "hygiene".
The 30-day target for applying patches to high-risk vulnerabilities is difficult to meet with production service systems. Deciding whether to apply each patch requires understanding the impact on production systems and any interaction with the running configuration. Patches do not always work as expected, and patch installation can be complicated for many systems such as a DBMS or even high-availability network devices. So patches need to be tried first in a staging environment that matches the production environment, and not everyone has such an environment. Patch installation instructions need to be bullet-proof and account for the specific production service configuration. And guess what, the patch from the vendor may not be available for the particular OS or configuration you use, or the patch may end up requiring an OS upgrade that necessitates more testing with production services. In a large development and support organization, ticking off all the boxes and getting approval to deploy might take a few more days. So the 30-day target can be a tough one to meet, even for CAT 1 critical vulnerabilities.