One of the places I worked at had a VERY good penetration testing team. Instead of being "tool monkeys," they actually vetted the findings. A tool monkey is someone who runs a tool like Nessus and then reports the findings as fact without considering anything about the environment the test was run against. In other words, if the tool said it was a critical finding, they report it as critical.

Some findings are rated critical because the vulnerability is exploitable from the Internet, but in an environment with no Internet access it wouldn't be nearly as severe; a tool monkey would still report it as critical. A GOOD testing team vets each finding to decide whether it should stay as categorized by the tool or move UP or DOWN in severity based on the environment, other countermeasures, etc.
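Just to make the idea concrete, here is a rough sketch of that kind of environmental adjustment, roughly what CVSS environmental metrics formalize. The field names and adjustment rules are made up for illustration, not any scanner's real output:

# Minimal sketch of environment-based severity vetting (illustrative only;
# the fields and rules below are hypothetical, not a real scanner's schema).

SEVERITY_ORDER = ["low", "medium", "high", "critical"]

def vet_finding(tool_severity: str, internet_exposed: bool,
                compensating_controls: int) -> str:
    """Adjust a tool-reported severity up or down based on context."""
    level = SEVERITY_ORDER.index(tool_severity)
    # A remotely exploitable flaw on a host with no Internet exposure
    # is less urgent than the raw tool rating suggests.
    if not internet_exposed:
        level -= 1
    # Each documented countermeasure (segmentation, WAF, etc.) is a
    # reason to argue the rating down in the vetting meeting.
    if compensating_controls > 0:
        level -= 1
    # Conversely, an exposed host with nothing in front of it is a
    # reason to argue the rating UP.
    if internet_exposed and compensating_controls == 0:
        level += 1
    return SEVERITY_ORDER[max(0, min(level, len(SEVERITY_ORDER) - 1))]

# Example: a "critical" finding on an isolated, segmented host gets
# argued down; a "high" on an unprotected Internet-facing host goes up.
print(vet_finding("critical", internet_exposed=False, compensating_controls=1))  # medium
print(vet_finding("high", internet_exposed=True, compensating_controls=0))       # critical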
Your testing team should take the results and weigh them against the environment. Relying on AI may help, but I enjoyed being invited to these vetting meetings and hearing the testers argue for or against each point and then come to a consensus on the results.
@CISOScott wrote: Your testing team should take the results and weigh them against the environment. Relying on AI may help, but I enjoyed being invited to these vetting meetings and hearing the testers argue for or against each point and then come to a consensus on the results.
Those are the best meetings to attend! Show me the Proof of Concept (PoC) exploit and I won't be "fooled again".