Include CVSS for each finding.
It's useful to have an Excel sheet with each finding in it, or a small database, to track remediation.
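If you go the small-database route, a minimal sketch using SQLite might look like the following. The schema, column names, and sample finding are illustrative assumptions, not a standard format:

```python
# Minimal remediation tracker sketch using SQLite.
# Schema and column names are illustrative assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path to persist between runs
conn.execute("""
    CREATE TABLE findings (
        id INTEGER PRIMARY KEY,
        title TEXT,
        cvss REAL,
        asset TEXT,
        owner TEXT,
        due_date TEXT,              -- ISO date, e.g. 2024-07-01
        status TEXT DEFAULT 'open'  -- open / fixed / accepted
    )""")
conn.execute(
    "INSERT INTO findings (title, cvss, asset, owner, due_date) "
    "VALUES (?, ?, ?, ?, ?)",
    ("SSLv3 enabled", 7.5, "web01", "netops", "2024-07-01"),
)
conn.commit()

# Open findings sorted by severity: the working remediation queue.
open_findings = conn.execute(
    "SELECT title, cvss, owner, due_date FROM findings "
    "WHERE status = 'open' ORDER BY cvss DESC"
).fetchall()
for row in open_findings:
    print(row)
```

Marking a finding fixed is then a one-line `UPDATE`, and re-running the query shows what's left.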
@msaikumar87, while the report from the platform will usually categorize vulnerabilities by severity, you can take this a step further if you've classified the assets being scanned: fixing vulnerabilities can then be prioritized based on how critical each asset is.
(Then again, if you're providing a service, this part would be the client's responsibility.)
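As a rough sketch of that idea: weight each finding's CVSS score by an asset criticality rating. The 1-5 criticality scale and the simple product score below are assumptions for illustration, not a standard formula:

```python
# Hedged sketch: prioritize fixes by CVSS weighted with asset criticality.
# The 1-5 scale and the product score are illustrative assumptions.
findings = [
    {"title": "Telnet enabled",   "cvss": 5.9, "asset": "core-switch"},
    {"title": "Missing MS patch", "cvss": 8.8, "asset": "dev-vm"},
]
# Criticality ratings come from the asset classification done beforehand.
asset_criticality = {"core-switch": 5, "dev-vm": 1}

def priority(f):
    # Unclassified assets default to a medium criticality of 3 (an assumption).
    return f["cvss"] * asset_criticality.get(f["asset"], 3)

for f in sorted(findings, key=priority, reverse=True):
    print(f["title"], round(priority(f), 1))
```

Note how the lower-CVSS Telnet finding on the critical switch outranks the higher-CVSS patch gap on a throwaway dev VM, which is exactly the point of classifying assets first.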
See if you can expand your view beyond what the vulnerability assessment tool tells you. What I mean is this: it is one thing to perform a scan and dump the results into a nice report; it is another thing to analyze the report for what the tool doesn't tell you. If all you do is run the tool and spit out the report, then you are what I refer to as a "tool monkey". Lots of people can run tools and spit out reports, some better than others. The key value is to analyze the report and point out what the tool won't tell you. Here are some examples:
1) If you see a piece of software that has 21 missing updates (Adobe Reader and Oracle/Java come to mind), that might indicate a poor patching process, or none at all.
2) If you see a lot of 3rd party software vulns (Adobe/Java/Flash/etc.) but no Microsoft vulns, it might indicate no patching of 3rd party software.
3) If you see all or most machines with the same number and identical types of vulnerabilities, that might indicate there is a bad or non-existent patching process.
4) If you see a wide disparity in the number of vulnerabilities between machines of the same type, e.g. Windows 10 workstations (one has 30 vulns, another has 5, another has 123), it may indicate that the patching process is applied inconsistently across the enterprise.
5) If you see servers or workstations that are old and out of extended service life (Windows XP, Vista, 7, Server 2003, etc.), it might indicate a lack of IT competence to upgrade an environment. It could also indicate poor network access control, with rogue devices or shadow IT.
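Checks 3 and 4 above lend themselves to a quick script over per-host vuln counts from a scan export. The host names, counts, and the 0.5 disparity threshold below are illustrative assumptions, not calibrated values:

```python
# Sketch of the "look beyond the report" checks: flag identical counts
# (check 3) and wide disparity between like hosts (check 4).
# Host names, counts, and the threshold are illustrative assumptions.
from statistics import mean, pstdev

# Per-host vulnerability counts for machines of the same type,
# echoing the Windows 10 workstation example above.
vuln_counts = {"ws01": 30, "ws02": 5, "ws03": 123, "ws04": 28}
counts = list(vuln_counts.values())

flags = []
if len(set(counts)) == 1:
    flags.append("identical counts on every host: patching process likely absent")
elif pstdev(counts) > 0.5 * mean(counts):  # crude disparity threshold (an assumption)
    flags.append("wide disparity between like hosts: patching applied inconsistently")

for f in flags:
    print(f)
```

Run the same check per machine class (workstations, servers, network gear) rather than across the whole fleet, since mixed classes will always show disparity.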
Those are just some examples of what you can extrapolate from a vulnerability scan. Here is a real-life example from one of my past engagements:
An outside agency was hired, for the 6th year in a row, to do a VA on a gov agency (the same agency that hired me in that 6th year). P.S. You should never hire the same pen testing company for more than 2 years in a row. This company had a bunch of tool monkeys. They came in, ran Nessus, printed out the reports, and then blasted the cyber unit in front of agency heads. Since I was still on probation, I had to bite my tongue while they ranted on and on about how bad we were. Here is what they basically said:
"You guys are terrible. You have over 13,000 vulnerabilities. You had about the same amount last year. You guys really suck at information security. Here is our nicely formatted vulnerability assessment to prove it."
Well, hold the phone... Here is what I wanted to say to them but couldn't, being the new guy (I did later tell the CISO and CIO this):
"Well, you scanned our 3 networks (Dev, Test, and Production), which are identical, so divide that number by three. That cuts it down to about 4,333 per network. Most of these vulnerabilities were from 3rd party software. The contractors we hired to perform our patching were only hired to patch the Windows OS, Oracle Database, and Red Hat; if they so much as touched anything else and made a mistake, they could be let go from the contract. You performed no root cause analysis, no logical thinking beyond what the tool told you to say, and provided no value to the organization."
I was appointed as the Tiger team lead to look into this. Here is what I found:
1) The government agency decided they wanted all of their government staff to be IT managers and to contract out all of the IT worker bees.
2) The contracting group wrote the patching contract with little to no involvement from IT, and they made it so narrow in scope that the contractor could only perform patches/updates (but not upgrades) on the three things in the contract: Microsoft operating system patches, Oracle Database patches, and Red Hat patches. Because the wording did not include upgrades, if those 3 items were out of date and required an upgrade, the contractor couldn't do it. Microsoft Office wasn't included in the wording, so no patches for it. Neither was IE.
3) Since all of the IT workers were contractors, the agency had no recourse to tell someone to go patch anything without violating the contract. None of the IT managers were IT savvy enough to be able to patch systems.
4) The people responsible for patching were not competent.
5) Since all 3 environments were IDENTICAL (and yes, each had the same number of findings; I didn't just divide 13,000 by 3, though I can't remember the exact number), it showed none of the patching progression one would expect in a mature organization. Patches should be tried and tested in dev and/or test before being approved for production, so production should always be behind in patching levels, not identical.
So the key problems were these:
1) No government IT workers in place to be able to fill the gap that the contracts didn't cover. It took anywhere from 3-9 months to write a contract and get it approved. This meant no coverage for things that happened during that time.
2) Poor contract management/writing. Needed to involve IT more in the creation and review of the contracts.
3) Poor patch management across the test/dev/prod environments.
4) Since all 3 environments were IDENTICAL, live production data was being used and stored in dev and test. This eventually required a contract to hire a company to come in and mask all of the real data in dev and test.
5) 3rd party programs had not been patched since being installed. HUGE attack surface.
6) It exposed the lack of competence of the IT organization on how to fix its own problems. The only suggestion they made in previous years was "We'll have to fix the contract!" But they never did.
7) Since they had used the same pen test company 6 years in a row, the company did the same thing with the same people and found the same thing, year after year. Nothing changed. By rotating companies, you at least have a chance for someone to try something different that may find something that wasn't found before. Most good pen testing companies won't let you sign them for more than 2 years consecutively.
8) Since the pen testing company couldn't provide any advice beyond what the tool monkeys said, they happily kept signing up for renewals. They offered no real value after the second year.
So you see, the real value is being able to assess what the results lead you to, not the results themselves.
Step 2 after the VA is: how do you go about fixing it? You need some tool/process for assigning responsibility and due dates for each fix, and then tracking it to completion. Do you have someone patch and then ask you to scan again? Do you run a scan in a month or two and compare to see progress?
Stuff like that.
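That compare-between-scans step is simple to automate if you model each finding as a (host, plugin ID) pair from the scan export. The hosts and plugin IDs below are made-up placeholders:

```python
# Sketch of comparing two scan exports to measure remediation progress.
# Findings are modeled as (host, plugin_id) pairs; the hosts and
# plugin IDs here are made-up placeholders, not real Nessus output.
scan_prev = {("ws01", 10863), ("ws01", 20007), ("db01", 10881)}
scan_curr = {("ws01", 20007), ("db01", 10881), ("db01", 45590)}

fixed = scan_prev - scan_curr      # present before, gone now
new = scan_curr - scan_prev        # introduced since the last scan
remaining = scan_prev & scan_curr  # still open across both scans

print(f"fixed={len(fixed)} new={len(new)} remaining={len(remaining)}")
```

The `remaining` set is the interesting one for the progress conversation: anything that survives two scan cycles with no owner or due date is where the process is breaking down.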
Running the tool is one thing, being able to provide competent advice on the path forward is another.
@CISOScott, really appreciate you going into such detail to help on this; it definitely gave me some pointers on the mindset to have for the analysis. For this customer, patching is handled by their internal teams, and this is my first VA engagement. Currently I am working on the presentation deck, leveraging the inputs from you and others on my post, which are really helpful. Though my SOW involved intranet applications/networks, most of the findings are related to missing MS and VMware ESXi patches, use of SSL v2/v3, self-signed certs, default SNMP configurations, use of Telnet, iDRAC vulnerabilities, and use of SSH v1, all of which pertain to their internal networks, with nothing related to 3rd party apps. I can definitely see the lack of a sound patching policy in place, which I was told is due to lack of manpower.
On my part, I have collected the aforementioned vulns and will show them the possible attack surface each one creates and recommend mitigation steps. However, at this point I have no information on asset criticality and therefore cannot measure business impact, which is what I'm proposing to them as the immediate next action.