meparlez
Newcomer I

Explaining Step 1 of the NIST SP 800-37 Risk Management Framework

Does anyone have any good experiences to share where you were successful at breaking down the Categorization step of the 800-37 RMF (step 1)? Or any advice on ways of explaining it in layman's terms?

 

When I break down the "know what you have" and "create an asset list/inventory" my audience is on the same page, but when I get to "know and classify your data types" they get lost and don't know where to start.

 

Thanks in advance. I'm new to this community, but look forward to being a part of it.

 

My background: four years of web application penetration testing and three years as manager of a security testing program where we conduct vulnerability discovery and risk analysis of various assets in various contexts.

11 Replies

ContractorMatt

I'm sure you've already spent time with FIPS 199/200. For me, when I'm working with data owners, I break everything down into Confidentiality, Integrity, and Availability. And yes, I do mean everything. For every group of information types, you can set a High, Med, or Low for each of C, I, and A. When you start to lay it out, the logic for how you will design a protection scheme will flow out and be obvious and defensible. By the way, I highly recommend CNSSI 1253, which applies this same logic to 800-53.
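
A minimal sketch of what "laying it out" might look like, with invented information types and ratings (the names and values are illustrative, not taken from FIPS 199 or CNSSI 1253):

```python
# Invented example: one High/Med/Low rating per security goal (C, I, A)
# for every group of information types, laid out as a simple table.
ratings = {
    # information type:    (Confidentiality, Integrity, Availability)
    "employee PII":        ("High", "Med",  "Low"),
    "public web content":  ("Low",  "Med",  "Med"),
    "financial records":   ("High", "High", "Med"),
}

print(f"{'Information type':<20} {'C':<6} {'I':<6} {'A':<6}")
for info_type, (c, i, a) in ratings.items():
    print(f"{info_type:<20} {c:<6} {i:<6} {a:<6}")
```

Once the table exists, the protection scheme tends to suggest itself: anything rated High in a column needs the stronger controls for that goal.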

 

Best of luck.

Early_Adopter
Community Champion

I've never formally used SP 800-37, so this might be way off the mark; however, for classification efforts with users (tied in with DLP/tagging projects) I generally go through the following steps to get people started:

 

  • Deliver a short presentation that shows the regulations and the threats to the audience (mostly to make it personal; lots of examples from the news);
  • Have them talk about how this relates to their data, their customers, their work;
  • Then get them to provide ideas on what their data classification should look like. I generally have them work with Venn diagrams on flip charts, as they can share ideas easily and it aids visualization. I try to get them to think about impact first and work on the basis of high, medium, and low buckets; then, if necessary, add more levels or an extra dimension in the form of scopes around company, role, and project. They work together in groups of three or four and then present back to the group.

Once you have the output of that, you can get them to work on "bucketizing" their data and have them cascade this out; a rough sketch of the kind of scheme that tends to come out of it follows below.
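
Purely illustrative (the level and scope names are not from any standard), here is the sort of impact-first scheme such a workshop might converge on:

```python
# Illustrative only: three impact-first buckets, with an optional "scope"
# dimension (company / role / project) layered on when needed.
from dataclasses import dataclass

LEVELS = ("low", "medium", "high")        # impact buckets, lowest first
SCOPES = ("company", "role", "project")   # optional extra dimension

@dataclass
class Classification:
    level: str                 # one of LEVELS
    scope: str = "company"     # one of SCOPES

    def label(self) -> str:
        return f"{self.level} / {self.scope}"

print(Classification("high", "project").label())  # -> high / project
```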

 

There's a lot of good info here:

 

https://www.gov.uk/government/publications/government-security-classifications

 

Basically, the UK decided that three levels were enough.

 

 

StFeuillien
Newcomer I

MeParlez, I have to agree with ContractorMatt and think you may find the charts in Appendix D most useful.
meparlez
Newcomer I

Thank you for the pointer to CNSSI 1253. It has some great tables that I'll study, and I'm also curious to see how the Appendix E parameters compare with my organization.

 

Do you find that the data owners you've worked with understand the difference between limited, serious, and catastrophic (relative to impact)? I've been successful with examples like "if this data were leaked on the Internet" or "if an exposure resulted in a Washington Post headline" (they understand those to be serious or catastrophic rather than merely limited), but I've found hitting the lower watermarks to be more difficult. Could you offer any insight into how to work through those conversations?

 

Thanks in advance.

meparlez
Newcomer I

Thank you for the resource link. Very interesting content as I've never studied categorization from the UK's perspective. 

 

I can see how working on impact first would be effective, and making it personal. Are you kind of asking your audience to ask themselves if a particular event in the news could/would happen to them, and then using that as a starting point?

Deyan
Contributor I

Hey, 

My 2 cents on this topic:

 

Classifying the data is tough indeed. I suggest that you do not rely 100% on the data owners to decide what the impact for the company would be if their data's C, I, or A gets compromised. Interview them and get their opinion, but draw the final conclusion yourself (keeping in mind that you are the security expert there). My suggestion: after you complete step 1 (inventorying your assets) and have a list of all systems, storage, databases, applications, etc., start applying filters like:

1) Sensitivity of data (you can use NIST's FIPS 199 or 800-37 criteria for low-moderate-high) 

2) Amount of data

3) Impact for your company (reputational; financial; employees' health) if that data is compromised (Conf; Integ or Avail.)

4) PI or HPI

and so on and so forth; you can apply customized filters that you deem appropriate for your company. At the end, hopefully you will have a list of your most critical assets that you need to address first (a rough sketch of this kind of filtering is below).
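
A minimal sketch of the filtering idea, assuming an inventory already exists; the asset fields, example data, and scoring weights are invented and would need tuning for a real environment:

```python
# Invented example: score each inventoried asset against a few filters
# (sensitivity, amount of data, business impact, personal data) and sort
# so the most critical assets surface first.
IMPACT = {"low": 1, "moderate": 2, "high": 3}

assets = [
    {"name": "HR database",    "sensitivity": "high", "volume_gb": 200,
     "business_impact": "high",     "contains_personal_data": True},
    {"name": "public website", "sensitivity": "low",  "volume_gb": 5,
     "business_impact": "moderate", "contains_personal_data": False},
]

def criticality(asset: dict) -> int:
    score = IMPACT[asset["sensitivity"]] + IMPACT[asset["business_impact"]]
    score += 1 if asset["volume_gb"] > 100 else 0    # "amount of data" filter
    score += 2 if asset["contains_personal_data"] else 0
    return score

for asset in sorted(assets, key=criticality, reverse=True):
    print(asset["name"], criticality(asset))
```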

Early_Adopter
Community Champion

Agree with @Deyan in that you are not trying to get the Data Owners, Custodians, and users to ultimately decide on impact for you; it's more of a scenario-based approach using impact and worst-case thinking.

 

@meparlez Impact is a less abstract concept than risk, especially if you have some good examples of silly things people do. My favorite is "Project Bookend" below; authority figures are always interesting.

 

Bank of England emails secret Brexit contingency project to the Guardian. This is genuinely hilarious, and I figure a light-hearted dig at the British as an icebreaker probably goes down well on your side of the pond?

This one might be topical in the US

 

Equifax, Yahoo, Coincheck: there are a lot of examples to go round. One practitioner even wrote a big book if you are stuck:

 

https://www.amazon.com/Privacy-Breaches-Aware-Protection-Experiences/dp/9814721980

 

And of course, GDPR and privacy in general are interesting. The aim is more to get the team energized and collaborating, rather than feeling powerless or that it's in the way. Then you can enlist their help willingly on things like data inventory. In DLP projects I've found that only certain teams truly appreciate what some data can mean, and perhaps they won't tell you about it.

 

Once you get an idea of the content they have (and hopefully they are using DLP of some sort), you can set about scanning, tagging, and inventorying everything. If you don't have any idea and no tools, you might just label everything "internal only" or similar with an expiry, and have users classify it properly when they use it.


Frank_Mayer
Contributor I

NIST covers each topic in depth on their site. For categorization, this link provides a wealth of detail: https://csrc.nist.gov/Projects/Risk-Management/Risk-Management-Framework-Quick-Start-Guides/Step-1-C... The site provides tips, fact sheets, and, most importantly, people you can contact for details. One twist is that the DoD does not use the High Water Mark categorization method when categorizing a system. The Prepare step BEFORE categorization is also key to making sure that the information needed to perform categorization for a particular system or mission application is available. One issue with this resource is that many fact sheets are still drafts, but if you go to the actual publication, it is not a draft. You also need to review FIPS 199: https://csrc.nist.gov/publications/detail/fips/199/final

 

A quote from this publication sums it up: "The generalized format for expressing the security category, SC, of an information type is: SC information type = {(confidentiality, impact), (integrity, impact), (availability, impact)},"
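
That expression is easy to mechanize. A small sketch, with example ratings that are mine rather than the publication's:

```python
# The FIPS 199 security category expression from the quote above,
# written as a function. Example ratings are invented.
def security_category(confidentiality: str, integrity: str, availability: str) -> str:
    return ("SC = {(confidentiality, %s), (integrity, %s), (availability, %s)}"
            % (confidentiality, integrity, availability))

print(security_category("MODERATE", "MODERATE", "LOW"))
# SC = {(confidentiality, MODERATE), (integrity, MODERATE), (availability, LOW)}
```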

 

NOTE: This DoD guidebook is very useful when working RMF for DoD systems since, as noted above, the DoD has some variances in its approach to the categorization step. Refer to https://www.dau.mil/tools/Lists/DAUTools/Attachments/37/DoD%20-%20Guidebook,%20Cybersecurity%20Risk%...

Respectfully,

Francis (Frank) Mayer, CISSP EMERITUS
CraginS
Defender I


@meparlez wrote:

Does anyone have any good experiences to share where you were successful at breaking down the Categorization step of the 800-37 RMF (step 1)? Or any advice on ways of explaining it in layman's terms?

Francisco,

 

A team I was on worked this very problem just over a year ago at a U.S. government department. Our team concluded that doing categorization "right" requires a multi-discipline team, and commitment from organization leadership that will ensure the needed people actually take part in the process.

Although aimed at government systems, I believe our lessons learned can be applied in other enterprises.

We used a core team of internal experts and external consultants (I was one  of the consultants) to set up the process, then members of that core team led selected stakeholders for each system under review to carry out the categorization process.

First, you need to use not only SP 800-37 but also SP 800-60, Vols. 1 & 2, Guide for Mapping Types of Information and Information Systems to Security Categories. In 800-60 you will find usable definitions, with amplifying discussion, of each of the impact levels (Low, Moderate, High) that the stakeholder team will apply. SP 800-60 also provides an extensive set of information types to consider as you analyze each system.

Next, for each system under review, you need to identify the relevant stakeholders and get both them and their bosses to commit to taking part in the process. That commitment must include taking part in at least two, possibly three, live meetings to walk through the actual categorization process. The stakeholder group should include, at a minimum, the system owner (responsible for maintaining and funding the system), data owners (those with authority to define the data types), representative system users, and, if possible, second-tier users, that is, users of linked systems that pull data from or send data to the system under review.

We learned from a pilot study that the stakeholder team will need Just-In-Time training on the reasons and requirements for the RMF process (keep it short and focus on mandated approvals and funding impacts), the nature of the Confidentiality-Integrity-Availability (C-I-A) triad, and the meaning of each of the three impact levels (L/M/H). We developed worksheets, shared with the stakeholder team in advance, to walk through the process.  We also learned that asking the stakeholders to independently complete the worksheet forms was not effective; we needed the real time live interaction of the stakeholders with the meeting moderators from the core team to really get the questions answered.

The master worksheet listed the complete set of information types from SP 800-60. Prior to tackling a specific system, the core team worked with an expert on the system to identify only those types in the list likely to be in the system, but we left the full list visible. At the first meeting with the complete stakeholder team we reviewed the need for categorization, the nature of C-I-A, and the specific definitions for each of the three impact levels. We also described the two-dimensional use of the High Water Mark principle, in that it is applied first within each security goal, and only at the end might it be rolled up into a single value for the system.

We then walked the stakeholder team through the information types, having them confirm whether each likely type was present in the system, then applying the impact levels for each security goal to that information type. We intentionally did NOT ask them what the impact level was at this point. Doing so is guaranteed to give you "gut-feel" impact levels instead of defensible values based on the definitions.

Instead, the prompting questions were as follows:

Confidentiality: What would be the result if someone not authorized to see this data got to the information?
Integrity: What would be the result if authorized users got this data from the system but the data was not accurate?
Availability: What would be the result if authorized system users could not get to the information when they needed it?

Only after obtaining the potential results statements did we match those statements against the three impact level definitions and descriptions. We completed the results-to-impact-level step immediately after the team confirmed the results statements in each case.

As we completed the table, we recorded both the results summary and the impact level justification, based on the definitions for C, I, and A, for each data type. This gave us a fully auditable record of exactly how the system category was obtained. To save time, we implemented part of the high water mark principle in that once a security goal was shown with impact High, we no longer asked about that goal for the later data types. For completeness, it would be nice to have all three done for every data type, but your stakeholder team has been pulled away from their primary jobs for this effort, and you want to get them back to their main jobs as quickly as possible.
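
A minimal sketch of the rollup just described: the high water mark applied within each security goal across all information types, with the time-saving shortcut of skipping a goal once it reaches High. The worksheet contents are invented:

```python
# Per-goal high water mark across information types. Deliberately NOT
# rolled up further into a single system-wide level (see below).
ORDER = {"LOW": 0, "MODERATE": 1, "HIGH": 2}

# {information type: {"C": ..., "I": ..., "A": ...}} -- invented data
worksheet = {
    "personnel records": {"C": "MODERATE", "I": "MODERATE", "A": "LOW"},
    "budget data":       {"C": "HIGH",     "I": "MODERATE", "A": "MODERATE"},
    "public notices":    {"C": "LOW",      "I": "MODERATE", "A": "LOW"},
}

system_level = {"C": "LOW", "I": "LOW", "A": "LOW"}
for levels in worksheet.values():
    for goal, level in levels.items():
        if system_level[goal] == "HIGH":
            continue  # shortcut: this goal already hit the high water mark
        if ORDER[level] > ORDER[system_level[goal]]:
            system_level[goal] = level

print(system_level)  # {'C': 'HIGH', 'I': 'MODERATE', 'A': 'MODERATE'}
```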

The meetings do not have to be face-to-face, but they must be held in real time, together, not asynchronously. We repeatedly saw stakeholders bring up new, relevant information as they heard others in the meeting describe the results of loss of C, I, or A. A simple voice-only teleconference can work, but it is much better to have a shared screen view, with one of the moderators acting as scribe, marking up the worksheet for all to view. This includes recording the results statements and the impact justification statements as the work proceeds.

Finally, we resisted allowing the final High Water Mark step of rolling the three goals into a single system impact level. That is because in later stages of the RMF process, understanding how much protection is needed for confidentiality, integrity, and availability individually will drive the selection of controls. This can have a big impact on the overall cost of implementing the controls.

Biggest concern: buy-in by both the stakeholders and their bosses is absolutely essential. Otherwise, they will not take the time to go through this process and will instead try to pencil-whip the answers.
A secondary concern is that some system owners will try to push a HIGH impact level just to show how "important" their jobs are, and in the process cause significant unneeded expense by implementing overly restrictive security controls.

 

(c) 2019 D. Cragin Shelton

 

The above essay can also be found on my blog at

https://cragins.blogspot.com/2019/05/system-security-categorization-under.html

D. Cragin Shelton, DSc
Dr.Cragin@iCloud.com
My Blog
My LinkedIn Profile
My Community Posts