cclements
Newcomer II

How paranoid should one be when assessing the risk?

Colleagues, 

 

One challenge I experience in assessing risk in applications and APIs is that I cannot know what a more expert, determined, or funded bad actor can do with even the most trivial of APIs. Developers often say to me, "there's no way they can get at other tables from this API." I just find that hard to accept, even if I can't explain to them how it could happen. They may be correct and the risk may be small, but in my mind that doesn't mean it can't be done, or that the consequences won't be far more serious than we think. All it takes is one oversight, and it's my job or worse.

8 Replies
ericgeater
Community Champion

"Developers often say to me, 'there's no way they can get at other tables from this API'. I just find that hard to accept."

 

I frequently say, "A claim is only as good as its veracity."  If a person has the wherewithal to make such a claim, then they should have the commensurate data to prove it.  They're doing assessments through fuzz testing and input sanitization.  They have a design practice that builds security into their applications.  They hire third-party orgs to do all these things.  And there's a paper trail from all this work being performed.
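The fuzz-testing claim is the easiest one to demand evidence for. Here's a minimal sketch of what such a check might look like, assuming a hypothetical `sanitize` function; both the function and the dangerous-character set are illustrative, not from any actual codebase discussed in this thread:

```python
import random
import string

DANGEROUS = "'\";-"  # illustrative set of characters the sanitizer must strip

def sanitize(value: str) -> str:
    """Hypothetical sanitizer: drops characters commonly abused in injection."""
    return "".join(ch for ch in value if ch not in DANGEROUS)

def fuzz(iterations: int = 10_000) -> None:
    """Throw random printable strings at the sanitizer and check a property."""
    for _ in range(iterations):
        raw = "".join(
            random.choice(string.printable)
            for _ in range(random.randint(0, 64))
        )
        out = sanitize(raw)
        # Property under test: no dangerous character survives sanitization.
        assert not any(ch in out for ch in DANGEROUS), f"leaked: {out!r}"

fuzz()
print("fuzz run complete: no dangerous characters leaked")
```

A run like this doesn't prove the API is safe, but it produces exactly the kind of paper trail the claim needs: a stated property, a test that exercises it, and a reproducible result.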

---
Judyblanks
Newcomer II

It has been suggested that anxiety and its related appraisal styles may contribute to the development of paranoia. The assessment and management of the risk of a person with a mental illness causing harm to another is an extremely important part of psychiatric practice.

JoePete
Contributor III


@cclements wrote:

 

One challenge I experience in assessing risk in applications and APIs is that I cannot know what a more expert, determined, or funded bad actor can do with even the most trivial of APIs. 


Yes, this is a challenge. I've had the experience of coming across something that I sense is a problem, but I can't quite quantify it or express it upfront. It eventually struck me that I was too focused on the probability side of things. An interesting exercise is to compare your loss expectancy against the original asset value. Think of it as measuring the risk of your risk assessment. If you have something with a high asset value and a low annualized loss expectancy (ALE), that should be an alert that you have a lot riding on an assumption. Keep in mind too that developers and sysadmins don't always realize what their work means. If, instead, you say, "Your code is protecting a $1 million asset," it might get them to re-think their cavalier dismissal based on probability.
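This "risk of your risk assessment" check can be made concrete with the standard quantitative risk formulas (SLE = AV × EF, ALE = SLE × ARO). A quick sketch; the dollar figures and rates are invented for illustration:

```python
# Quantitative risk check: compare ALE against the asset it protects.
ASSET_VALUE = 1_000_000            # AV: what the code is protecting ($)
EXPOSURE_FACTOR = 0.8              # EF: fraction of the asset lost per incident
ANNUAL_RATE_OF_OCCURRENCE = 0.01   # ARO: assumed incidents per year

sle = ASSET_VALUE * EXPOSURE_FACTOR        # single loss expectancy
ale = sle * ANNUAL_RATE_OF_OCCURRENCE      # annualized loss expectancy

# A high asset value with a low ALE means the low number rests almost
# entirely on the ARO assumption -- "a lot riding on an assumption."
ratio = ale / ASSET_VALUE
print(f"SLE=${sle:,.0f}  ALE=${ale:,.0f}  ALE/AV={ratio:.1%}")
```

Here the ALE looks comfortingly small, but a single bad guess on the occurrence rate moves it by an order of magnitude, which is exactly the conversation to have with the developer.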

cclements
Newcomer II

This is really good feedback.  You're right.  The value of the asset along with "commensurate with risk" should remain front of mind.  As a former developer, I often get hung up on details (flaws).

cclements
Newcomer II

@Judyblanks It's possible I wasn't clear in my initial post.  I was asking if I was right to be concerned with what I felt was a potential flaw or risk in an application despite not being able to prove that the flaw could be exploited.

ericgeater
Community Champion

Judy, you know what heightened my paranoia?  Studying for CISSP.

 

One day I was just frolicking through the garden, thinking, "Man, I wonder how exposed my employer actually is."  A few weeks later, every subsequent page-turn of the CBK was yet another reminder of the unmitigated risks left to assess.

---
ericgeater
Community Champion

@cclements and that's what it's all about, really.  Flaws.  But you gotta know whether it's software flaws, API flaws, mitigating-control flaws, or flaws in governance that determine how much protection your assets require.  So who's your POC that can answer these questions?  😄

---
DHerrmann
Contributor I

A good guide when confronting questions like this is to fall back on your company's risk appetite. When it's published, leadership is telling the company which types of risk are acceptable and which are to be avoided, treated, or insured.
Failing that, it's important to look at the inherent risk of the application: an app used in a hospital to track patient health history has a completely different risk profile than a stock trading platform, which in turn differs from a game like Words With Friends.