Contributor I

Frequency of Vulnerability Scanning

Hello folks,

I am a bit frustrated by this topic because of the many differing opinions. Let me know what you think - how often should a company run vulnerability scans on their networks/infrastructure? I am asking about the network/server patch/hardening type of scans - not a pentest or similar. Some say bi-weekly, others monthly or quarterly, and I've even heard of an annual scanning practice (in my opinion - way too long). Appreciate your time.

19 Replies
Contributor II

I’d say monthly is probably sufficient in general, as it gives those responsible for systems time to apply patches before you drop new info on them. It really depends on your patch cycle, though. Caveats to that are:

- after disclosure of a high-impact vulnerability, you probably want to verify whether it is present in your environment
- when new assets come online, scan them specifically (this may require some communication between provisioning and security)

Also, make sure the teams responsible for deployments are updating golden images so you don’t just keep reintroducing the same vulns over and over again.

That’s probably as much advice as is fit for thumb typing.
-- wdf//CISSP, CSSLP

Depending on the network area and risk there are daily, weekly, monthly and annual scans.

I have heard many companies only do quarterly or annual scans. That seems insufficient given today's threat landscape.

But it would depend on the industry and the size of the company and regulatory impacts.

Many companies do a variety of each type.


Mike Patten


Community Champion

The frequency should depend on what you do with the results. If you do nothing with the results, then you really don't need vulnerability scanning at all, because nothing is going to get done - do it once a year to satisfy the check box and meet the requirement. I have been places where they scanned daily, and I argued it was too often because we weren't doing anything with the results.

Daily - if you have a team working the results; you should expect to see the list shrinking unless new vulnerabilities are released.

Weekly - if you have someone doing some remediation activities during the week. If nothing was done, there is no need to do another one UNLESS you want to show the cost of inactivity through an ever-growing list of vulnerabilities.

Do you get my point here?

I argued that we should go to monthly or on demand scans. My plan was this:

1) Do a scan.

2) Deliver the results to the remediation team.

3) Perform remediations.

4) Ask for another scan.

5) If no remediation work was done at the one month mark, do another scan to show how much worse it was getting with no activity in remediation.
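That scan-remediate-rescan loop can be sketched as a small decision function. This is a minimal illustration of the plan above, not any real scanner's API; the function name and 30-day threshold are my own assumptions:

```python
from datetime import date, timedelta

def next_scan_due(last_scan: date, remediation_done: bool, today: date) -> bool:
    """Decide whether to kick off another scan.

    Scan again as soon as the remediation team reports work done (steps 3-4);
    otherwise fall back to a scan at the one-month mark that documents the
    growing backlog (step 5).
    """
    if remediation_done:
        return True  # remediation finished, ask for another scan
    # no remediation yet: rescan anyway once a month has passed
    return today - last_scan >= timedelta(days=30)

# Example: nothing remediated, 31 days since the last scan -> scan anyway
print(next_scan_due(date(2024, 1, 1), False, date(2024, 2, 1)))  # True
```

The point of the sketch is that the trigger is remediation activity, not the calendar; the calendar is only the fallback.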


At one agency we had a requirement for daily scans, and no one wanted to change it because it had been written into the rules, so they kept doing them. At the beginning of every quarter they would do a scan and turn it into a POAM (Plan of Action and Milestones - i.e. a list of vulnerabilities that had to be closed in 3 months). The sysadmins would wait until 2 weeks before the deadline and then work like mad to try to close all of them. We would usually have to ask for an extension. At one time we had 3 of these POAMs open, which meant some vulnerabilities had existed for 9 months. So were the daily scans helping to remediate the vulnerabilities? No, they just kept adding to the total, which the scan team would then get mixed up with the new POAM results, sending conflicting reports of what had to be closed for which POAM.


So my recommendation is to understand why you are scanning. Is it for knowledge to force an action (i.e. we do not have enough sysadmins to work vulns, we do not have an adequate security program, etc.)? Or is it to check a box? Once you understand the why, you will know how often to do it.

Community Champion

Another thing to consider if you scan at more than a one-month interval: do you have someone keeping up with the latest vulnerabilities so you can jump on the very critical ones (i.e. the ones that make it into the media)? Your interval will depend on your resources. You can use shorter intervals to show the need for more resources.
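One way to keep up between scans is to poll a public feed such as NVD. Below is a sketch of building such a query against the NVD 2.0 REST API; the parameter names reflect my reading of that API and should be verified against the current documentation before relying on them:

```python
from datetime import datetime, timedelta
from urllib.parse import urlencode

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def critical_cve_query(days_back: int, now: datetime) -> str:
    """Build an NVD query URL for CRITICAL-severity CVEs published recently."""
    start = now - timedelta(days=days_back)
    params = {
        "cvssV3Severity": "CRITICAL",
        "pubStartDate": start.strftime("%Y-%m-%dT%H:%M:%S.000"),
        "pubEndDate": now.strftime("%Y-%m-%dT%H:%M:%S.000"),
    }
    return f"{NVD_API}?{urlencode(params)}"

# Everything CRITICAL published in the last week:
url = critical_cve_query(7, datetime(2024, 6, 8))
print(url)
```

Feeding the result to a scheduled job that alerts on any hits gives you the "jump on the very critical ones" trigger without waiting for the next scheduled scan.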

Viewer II

I guess this really depends on your environment. I work for a natural gas pipeline company, and we do vulnerability scans on a quarterly basis. There are some exceptions to this rule.


For example, we recently made some major changes to our infrastructure and firewalls over 2 weeks. As soon as we completed the changes and had a stable network, we did external and internal vulnerability scans.


The other exception is new applications. While they are still in development, the applications are heavily scrutinized.


You may think that a quarterly basis is a long time between scans. We run two separate production and DMZ networks because of NERC requirements, but the auditors seem happy with the scheduling.


If you're in a large or enterprise environment, then I would consider more frequent scanning.

Community Champion

For some, it depends on security/maturity level.  


When a company is just starting out, there is a lot of clean-up involved, and that is just with non-authenticated scanning.  The frequency might be monthly or quarterly during this catch-up phase.


Not only will you find things to patch, but you will also find inventory you did not know you had.


Later you can add authenticated scanning at the same monthly or quarterly interval.  


Once the authenticated scanning is looking good, start reducing the interval to match your policies and/or risk tolerance.  


Currently, we do a full authenticated vulnerability scan every week on every network.  The numbers from these scans can startle some.  


For example, Microsoft/Adobe/Google/Oracle patches might take up to 10 days to deploy, but the scanner will spot them missing in one day.  So for nine days the score might be very low until they are deployed.  That is because a single HIGH vulnerability missing from all of your systems (1000/1000 systems) will pull the score down fast.
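The arithmetic behind that score drop can be illustrated with a toy formula. Real scanners weight findings by severity and use their own scoring models; this function and its threshold are purely hypothetical:

```python
def scan_score(findings_per_host: float, max_expected: float = 5.0) -> float:
    """Toy score: 100 when clean, falling toward 0 as the average number of
    findings per host approaches max_expected (an illustrative cutoff)."""
    return max(0.0, 100.0 * (1.0 - findings_per_host / max_expected))

hosts = 1000
# One new HIGH vulnerability missing its patch on every single host:
findings = 1 * hosts
print(scan_score(findings / hosts))  # one finding per host drags the score down
```

Because the missing patch appears once per host, a single vulnerability contributes 1000 findings at once, which is why the score recovers just as sharply once the patch finishes deploying.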


The way I describe it is like a slow wave crashing on the shore and leaving debris behind.  You go out and pick up the mess, and a few days later another wave arrives.


Besides the weekly scans, everyone is empowered to do on-demand scans.  As systems are being built, a few scans are required before one will ever go into production.  Follow the procedures and you have no new surprises.


Get caught up and stay caught up is the method we use.


Community Champion

One thing that I think will happen is that we'll see it getting 'faster', or at least we'll see the 'frame rate' of patching increasing.


We had an interesting example of anticipation from Microsoft in Windows 10, the self-proclaimed last-ever version of Windows.


Essentially Microsoft is still shipping new versions of Windows under Windows 10, just one or two a year rather than what we'd become used to before Win10. The most pertinent data point here is that Windows 10 has a habit of breaking third-party disk encryption in its Anniversary and Creators updates. That never used to be the case, and by electing to essentially reinstall on a much faster cycle, Microsoft can be much more agile, address issues more quickly, etc. IMHO it helps that innovation around PCs has slowed as well, as this means fewer exciting rounds of hunt-the-driver.


So why's that relevant? Well, if you think of the OS as the slow tick in the system, and real OS updates become much quicker and leave less cruft behind, then application releases, maintenance packs, hot fixes etc. become quicker to deal with as well, and there is more of a point in scanning more frequently. In fact, maybe a lot of component systems in on-prem data centers will follow the 'cattle, not pets' paradigm and not live long enough to be scanned unless you are very quick - so you look at compliance in base images and weed out deviating systems.


Viewer II

As the other answers have already said - it depends.


However, one key thing to keep in mind is the difference between scanning and reporting.


If you are able to scan weekly with no significant impact on the target systems or intermediate systems (e.g. fragile firewalls), then scan weekly.


You can then set the reporting period to match the compliance framework, whether monthly or quarterly, yet have confidence that your data is no more than 1 week old should you need to query it.  It will likely be easier to change the reporting schedule than the scanning schedule.
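Decoupling the scan cadence from the reporting cadence boils down to picking the most recent scan on or before each reporting date. A minimal sketch with made-up dates (no real scanner or reporting tool assumed):

```python
from datetime import date

def latest_scan_before(report_date: date, scan_dates: list[date]) -> date:
    """Pick the most recent scan taken on or before the reporting date."""
    eligible = [d for d in scan_dates if d <= report_date]
    return max(eligible)

# Weekly scans, a monthly (or quarterly) report pulled mid-cycle:
weekly = [date(2024, 3, d) for d in (4, 11, 18, 25)]
report = date(2024, 3, 20)
latest = latest_scan_before(report, weekly)
print(latest, (report - latest).days)  # data is at most a week old
```

Whatever the compliance framework's reporting interval, the freshness of the underlying data is bounded by the scan interval, not the reporting one.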


In my environment we have a mixture of daily and weekly scanning, with weekly or monthly reporting and action tied into patch schedules.


Don't forget to include processes to catch systems not scanned for whatever reason, e.g. authenticated scan failures, newly added systems, or BYOD if applicable.
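Catching those unscanned systems is essentially a set difference between the asset inventory and the scanner's coverage. A sketch with hypothetical host names:

```python
def coverage_gaps(inventory: set[str], scanned: set[str]) -> set[str]:
    """Systems in the asset inventory that the last scan cycle never touched,
    whether due to auth failures, late provisioning, or BYOD."""
    return inventory - scanned

# Illustrative inventory vs. what the scanner actually reported on:
assets = {"web01", "db01", "byod-laptop-7", "new-app-stg"}
seen = {"web01", "db01"}
print(sorted(coverage_gaps(assets, seen)))  # ['byod-laptop-7', 'new-app-stg']
```

The hard part in practice is keeping the inventory side accurate; the comparison itself is trivial once both lists exist.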


And try to build the business case to move to scan agents on target systems where supported.

Newcomer II

Agreed. I think sometimes we tend to look for a "standard" answer when there really isn't one. Like anything else security-related, monitoring/maintenance is a very specific task that needs to be tailored to the environment. And even then, there may be multiple standards depending on the types of systems in use.


In the end, there are a lot of factors that go into scheduling scans. For some places, certain systems may be static enough, and the criticality of information stored/processed low enough, that an annual scan is sufficient. And the flip side may also be true, where you'll need to do scans every few days because of how a system is used.


Personally, I think the answer comes down to this: scanning needs to be as frequent as the risk assessment dictates.