pnk2017
Newcomer I

Security Metrics of First and Second Level Controls

Hello All,

 

I'm trying to refine the split of security controls at my workplace between IT Security (the first-level controller) and Information Risk (IR) Security (the second-level controller). As the second-level controller, I'm required to provide security metrics to our risk management committee on a regular basis.

 

For instance, providing the number of non-compliant servers in our estate, and the number of non-compliant settings over the total number of settings to be applied.

 

Could someone advise or guide me on how this should be divided between the first-level and second-level controllers? What types of metrics should each level produce?

 

Thanking you for your kind input.

 

Patrice

 

8 Replies
Deyan
Contributor I

Hello,

This is pretty holistic, to be honest. Your asset classification (for the 1st and 2nd controllers) might be tricky. I suggest you put down a security baseline for each asset, then assess the security controls that are applied to it and report on the ones that are missing. For example, take a server as the asset; the controls I would have are:

- latest patches

- hardening

- local firewall

- network firewall

- security group policy applied

- audit features enabled

 

and you have six security controls to report as in place / not in place. Another example - a mobile phone:

- encryption

- mandatory locking features

- remote wiping capabilities 

and so on. So try to establish a security baseline for each asset and report against that - hope that makes sense.
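A minimal sketch of this per-asset baseline reporting, assuming a simple in-place / not-in-place model. The asset types and control names below are illustrative examples, not a standard:

```python
# Per-asset-type security baselines; names are examples only.
BASELINES = {
    "server": ["latest patches", "hardening", "local firewall",
               "network firewall", "security group policy", "audit features"],
    "mobile": ["encryption", "mandatory locking", "remote wipe"],
}

def compliance_report(asset_type, controls_in_place):
    """Return (in_place, missing) control lists for one asset."""
    baseline = BASELINES[asset_type]
    in_place = [c for c in baseline if c in controls_in_place]
    missing = [c for c in baseline if c not in controls_in_place]
    return in_place, missing

# Example: a server with only 3 of its 6 baseline controls applied.
in_place, missing = compliance_report(
    "server", {"latest patches", "local firewall", "audit features"})
print(f"{len(in_place)}/{len(in_place) + len(missing)} controls in place")
print("missing:", missing)
```

The same structure scales to any asset type: add a baseline entry, then report the missing list per asset.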

 

Baechle
Advocate I

Patrice,

 

What you are describing is basically the two-tier implementer/verifier approach.

 

Really, this boils down to determining what your sample rate is going to be.  You can either choose 100%, or less than 100% through some kind of sampling method.  The closer to 100%, the more work you and your Level 1 folks are going to be doing.

 

The next step is to determine the granularity of your reporting.  You can select binary (compliant/non-compliant), a percentage (95% of the controls were implemented), or tiered (e.g. 100% of mandatory controls, 95% of critical controls, and 70% of serious controls).  The easiest method I have found is a spreadsheet of the controls, tabbed by asset.  You and Tier 1 go over each control, with the Tier 1 person explaining the control and its expected value during an interview and then demonstrating the current setting to you.  When you are done, you can create several levels of reports from the spreadsheet data.
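The three granularities can all be computed from the same spreadsheet-style control data. The asset name, control names, and tier labels below are hypothetical:

```python
# One asset's controls: name -> (tier, is_compliant).
controls = {
    "web-01": {"patching": ("mandatory", True), "mfa": ("mandatory", True),
               "logging": ("critical", True), "backups": ("critical", False),
               "banner": ("serious", False)},
}

def binary(asset):
    """Binary report: compliant only if every control is in place."""
    return all(ok for _tier, ok in controls[asset].values())

def percentage(asset):
    """Percentage report: share of controls in place."""
    vals = [ok for _tier, ok in controls[asset].values()]
    return 100.0 * sum(vals) / len(vals)

def tiered(asset):
    """Tiered report: percentage in place per tier."""
    counts = {}
    for tier, ok in controls[asset].values():
        n_ok, n = counts.get(tier, (0, 0))
        counts[tier] = (n_ok + ok, n + 1)
    return {t: 100.0 * n_ok / n for t, (n_ok, n) in counts.items()}

print(binary("web-01"))      # False
print(percentage("web-01"))  # 60.0
print(tiered("web-01"))      # {'mandatory': 100.0, 'critical': 50.0, 'serious': 0.0}
```

All three reports share one data source, so Tier 1 and Tier 2 can agree on the raw data while choosing different reporting granularities.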

 

Finally, the format I have found most acceptable to senior management is a "stop light chart".  You name your critical systems and networks and provide a green icon for compliant, yellow for mostly compliant or soon-to-be compliant, red for non-compliant, and black/white (depending on your background) for unable to comply.  Next to each is a summary of why that system is represented with that icon.
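One way to derive the stop-light icon from a compliance percentage; the thresholds here are arbitrary examples, not a standard:

```python
def stoplight(pct, able_to_comply=True):
    """Map a compliance percentage to a stop-light colour (example thresholds)."""
    if not able_to_comply:
        return "black"   # unable to comply
    if pct >= 100:
        return "green"   # compliant
    if pct >= 90:
        return "yellow"  # mostly / soon-to-be compliant
    return "red"         # non-compliant

print(stoplight(95.0))   # yellow
```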

 

Sincerely,

 

Eric B.

 

 

pnk2017
Newcomer I

Hi Deyan,

 

Very grateful for your prompt reply.  Everything you stated makes sense.

 

That is actually what is set up at my workplace, as you described, and I usually report on the number of non-compliant (not in place) controls applied to our assets, based on the CIS Benchmarks or other baselines used in my workplace.

 

However, I forgot to mention that we (the Information Risk team) used to perform the tasks of both the first- and second-level controller at the same time.  This is going to change, as the IT Security team is going to act as first-level controller.  Basically, all the metrics run by my team - reporting on the number of non-compliant assets - will be run by IT Security.

 

I just want a clear-cut distinction between the work/tasks to be conducted by the first-level controller and those of the second.

 

 

Thanks and Regards

pnk2017
Newcomer I

Hello Eric,

 

Thanks a lot for your reply.

 

 

It does look like a kind of implementer/verifier split.  However, in my case, it is we (the Information Risk team) who set the baseline, including the expected values, to be applied by IT to our assets.  It is then up to IT Security to verify that these settings have been applied by IT Service as we recommended.

 

We have two different IT teams: IT Service and IT Security.  The service team configures each asset as per our baseline.

 

We usually rely on 100% mandatory controls, and we use an automated tool to scan these assets against our benchmark for a compliant/non-compliant count, then follow up on IT's action plan.  By the way, my boss makes use of the "stop light chart".

 

As stated in my previous comments, we would like to make a clear distinction between what IT Security runs and what our team runs, in terms of the metrics and controls to be set up.

 

Thanking you for your time 

 

Patrice

 

 

Baechle
Advocate I

Patrice,

 


@pnk2017 wrote:

As stated in my previous comments, we would like to make a clear distinction between what IT Security runs and what our team runs, in terms of the metrics and controls to be set up.


Then I guess I don't understand what you're attempting to articulate here.  Please allow me to describe a similar scenario, and then tell me again what you are attempting to do.

 

-------------------------------------------------------------------------------

 

I was the acting Command Information Assurance Manager for one Agency, and a DAA Staff Engineer in two prior positions.  My primary job was to help the Command Staff IT Director of Plans and Policy or the DAA understand the current landscape of compliance across four Domains, inclusive of the networks and systems.  My secondary job was to help the IT Operations group navigate the process of interpreting policy, identifying required deviations from baselines, and applying for control waivers when necessary.

 

The problem was that the Agency HQ had to be physically located somewhere, which meant the Agency HQ staff were collocated with one of the field offices' IT Operations offices.  In both prior positions, the IT Operations office believed that the HQ Staff administrative oversight office was actually the computer security implementer for that facility, and abandoned those duties.

 

We had to sit folks down and go over policy, organizational charts, concepts of operations vs audit vs oversight, and the requirement of policy and compliance activities to be independent of operations and maintenance activities.

 

Is this the thing you're dealing with?  Are you trying to find metrics that identify why these two activities are different?

 

Sincerely,

 

Eric B.

pnk2017
Newcomer I

Hi Eric,

 

That's exactly what we are trying to achieve with the folks from the IT Security team: "concepts of operations vs audit vs oversight, and the requirement of policy and compliance activities to be independent of operations and maintenance activities."

 

We have gone through defining a RACI matrix to distinguish the types of responsibilities and tasks among operations, maintenance, and oversight for the various actors involved.  We used to do a lot of operational tasks, with less focus on our core mission, which is risk assessment and ensuring compliance with our policy; but this has changed, and IT Security has started taking the lead in some activities.  It started a few months ago, and slowly but surely the IT Security team has been operating as it should.  We have come to an agreement that we will oversee the controls implemented by IT Security (patches, hardening, open shares, etc.).
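A RACI split of this kind might look like the sketch below, expressed as a simple data structure. The roles and activities are hypothetical illustrations, not the poster's actual matrix (R = Responsible, A = Accountable, C = Consulted, I = Informed):

```python
# Hypothetical RACI matrix: activity -> role -> RACI code.
RACI = {
    "define baseline and expected values": {"InfoRisk": "A/R", "ITSec": "C", "ITService": "I"},
    "apply settings to assets":            {"InfoRisk": "I",   "ITSec": "C", "ITService": "R"},
    "first-level compliance scan":         {"InfoRisk": "I",   "ITSec": "R", "ITService": "C"},
    "second-level audit and challenge":    {"InfoRisk": "R",   "ITSec": "C", "ITService": "I"},
}

for activity, roles in RACI.items():
    print(f"{activity}: {roles}")
```

Keeping the matrix machine-readable like this makes it easy to check that no activity has two Responsible parties, which is one way duplicated metrics creep in.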

 

To your question "Are you trying to find metrics that identify why these two activities are different?"

 

Yes, in a way, but I would like to know how to effectively run the different metrics without repeating the ones carried out by IT Security (first level) versus us (second level).  The IT Security team provides us with a monthly report on the number of assets that are compliant and non-compliant with the various controls in place, along with their action plan to address the non-compliant ones.

 

As second-level controller, I usually take the same figures returned by the IT Security team's monthly run and report them to my management - if they are satisfactory.  But I see this as a kind of duplication.  Is there any way we can carry out this task without duplicating the metrics - by challenging them, for example?

 

I know it is quite difficult to make clear what I'm trying to achieve, but this is quite new for me.  I hope you will understand.

 

Thanks again for your contribution which I really appreciate

 

Patrice

 
Baechle
Advocate I

Patrice,

 


@pnk2017 wrote:

That's exactly what we are trying to achieve with the folks from the IT Security team: "concepts of operations vs audit vs oversight, and the requirement of policy and compliance activities to be independent of operations and maintenance activities."

 


As second-level controller, I usually take the same figures returned by the IT Security team's monthly run and report them to my management - if they are satisfactory.  But I see this as a kind of duplication.  Is there any way we can carry out this task without duplicating the metrics - by challenging them, for example?

 


 

Ok.  I think I understand better what you are looking for.

 

In the scenario I described, we were an oversight office responsible for multiple field offices, so the metrics we provided to our leadership were more of a roll-up of several IT Security Operations groups, with trend information to rack and stack their performance.

 

The value-added service we provided was the result of audits contrasted against the IT Security Operations self-assessment.  For example, the IT Security Operations group would report 99% compliance on patches and controls.  That quarter we would go in, evaluate, and find that they were really at 94% compliance.  So the base metric was the deviation (5%) in control compliance between the self-report and our audit.
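This base metric is simple enough to state in code; the figures follow the example above (99% self-reported vs 94% audited, a 5-point gap):

```python
def deviation(self_reported_pct, audited_pct):
    """Gap between self-reported and audited compliance.
    Positive result means the group over-reported its compliance."""
    return round(self_reported_pct - audited_pct, 1)

print(deviation(99.0, 94.0))  # 5.0
```

The second-level value is in this gap, not in re-publishing the first level's own numbers.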

 

We then broke that down to identify what that 5% was and why it was missed.  For example: (A) an administrator whose sole purpose was maintenance of a particular system departed, and that system was no longer being taken care of; (B) a system was implemented but not documented, so when new administrators arrived they did not continue maintaining it because they didn't know it was there.

 

Finally, we would show trends in those deviations to identify whether they were getting better, worse, or staying generally the same.  We would support that in our leadership brief with an analysis of the root causes, to determine whether there was a problem that wasn't being addressed.  For example: (A) that dedicated administrator position for a unique system has gone unfilled for six months, or two quarters; or (B) systems are still being maintained but not documented, even after documentation was identified as an issue, so the next refresh of staff repeated the same mistake.
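A rough sketch of that trend check: given the deviation for each quarter, classify whether the gap is improving, worsening, or about the same. The tolerance value is an arbitrary example:

```python
def trend(deviations, tolerance=0.5):
    """Classify a series of per-quarter deviations (in percentage points)."""
    delta = deviations[-1] - deviations[0]
    if delta < -tolerance:
        return "improving"
    if delta > tolerance:
        return "worsening"
    return "about the same"

print(trend([5.0, 4.0, 2.5]))  # improving
```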

 

Since we had multiple field offices, each with its own IT Security Operations, we also identified groups that were doing things well or poorly and set up opportunities for them to cross-mentor each other.  I personally don't know if your operation is big enough to do that.

 

Hope that helps.

 

Sincerely,

 

Eric B.

 

 

pnk2017
Newcomer I

Hi Eric,

 

 

Thanks a lot again.  I believe I can try this approach to see if there is any deviation from what IT is providing to us, determine the root cause, and resolve any further issues.

 

Kudos

 

Thanks and Regards

Patrice