AppDefects
Community Champion

Predicting Vulnerability Weaponization

Not every vulnerability becomes weaponized (abused by an exploit or malware). In fact, most don't. Of the more than 120,000 vulnerabilities tracked by the United States National Vulnerability Database (NVD), fewer than 24,000 have been weaponized. As a result, many organizations are turning to analytics and risk-based vulnerability management to prioritize the vulnerabilities that are weaponized and have the highest impact. The following article presents a framework for using AI to predict whether a vulnerability has the potential to be weaponized. It relies upon CVSS scores, which are themselves qualitative, so while I think the research has a lot of potential, much more work needs to be done.
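As a rough illustration of what such a framework might look like, here is a minimal sketch of a classifier trained on CVSS-derived features. The input file, feature names, and labels are hypothetical placeholders, not the article's actual method; in practice you would need to join NVD records with an exploit or malware feed to get the labels.

# Minimal sketch of a weaponization classifier. The CSV "cve_features.csv",
# the feature names, and the "weaponized" label are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical input: one row per CVE with CVSS base metrics already encoded
# numerically (e.g., attack_vector: NETWORK=3, ADJACENT=2, LOCAL=1, PHYSICAL=0).
df = pd.read_csv("cve_features.csv")
features = ["base_score", "attack_vector", "attack_complexity",
            "privileges_required", "user_interaction"]
X = df[features]
y = df["weaponized"]  # 1 if a public exploit/malware abuses the CVE, else 0

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# class_weight="balanced" because weaponized CVEs are the minority class
model = LogisticRegression(class_weight="balanced", max_iter=1000)
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))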
 
 
2 Replies
CISOScott
Community Champion

One of the places I worked at had a VERY good penetration testing team. Instead of being "tool monkeys," they actually vetted the findings. A tool monkey is someone who runs a tool like Nessus and then reports the findings as fact without considering anything about the environment the test was run against. In other words, if the tool said it was a critical finding, they reported it as critical. Some findings are rated critical because the vulnerability is exploitable from the Internet, but in an environment with no Internet access it wouldn't be as critical; a tool monkey, however, would still report it as a critical finding. A GOOD testing team will vet the findings to see whether they should remain as categorized by the tool or move UP or DOWN in severity based on the environment, other countermeasures, etc.

 

Your testing team should take the results and contrast them with the environment. Relying on AI may help, but I enjoyed being invited to these vetting meetings, hearing the testers argue for or against the points, and then coming to a consensus on the results.
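For illustration only, here is a rough sketch of how that kind of environment-aware re-rating could be encoded. The asset fields and adjustment rules are hypothetical; every team would define its own policy (or use the CVSS environmental metric group instead of ad hoc rules).

# Hypothetical sketch: move a scanner finding up or down based on asset context.
SEVERITY_ORDER = ["low", "medium", "high", "critical"]

def adjust_severity(tool_severity, asset):
    """Re-rate a tool finding using environment metadata (all fields assumed)."""
    idx = SEVERITY_ORDER.index(tool_severity)
    # A remotely exploitable "critical" matters less on a host with no
    # Internet exposure; compensating controls (segmentation, WAF) also lower it.
    if not asset.get("internet_exposed", True):
        idx = max(idx - 1, 0)
    if asset.get("compensating_controls"):
        idx = max(idx - 1, 0)
    # Crown-jewel systems move findings up rather than down.
    if asset.get("business_critical"):
        idx = min(idx + 1, len(SEVERITY_ORDER) - 1)
    return SEVERITY_ORDER[idx]

print(adjust_severity("critical", {"internet_exposed": False,
                                   "compensating_controls": ["segmentation"]}))
# -> "medium"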

AppDefects
Community Champion


@CISOScott wrote:

Your testing team should take the results and contrast them with the environment. Relying on AI may help, but I enjoyed being invited to these vetting meetings, hearing the testers argue for or against the points, and then coming to a consensus on the results.


Those are the best meetings to attend! Show me the Proof of Concept (PoC) exploit and I won't be "fooled again".