One challenge I experience in assessing risk in applications and APIs is that I cannot know what a more expert, determined, or funded bad actor can do with even the most trivial of APIs. Developers often say to me, "there's no way they can get at other tables from this API". I just find that hard to accept. Yet I can't explain to them how an exploit would happen. They may be correct and the risk may be small, but that doesn't mean it can't be done, or that the impact won't be far more serious than we think. All it takes is one oversight, and it's my job on the line, or worse.
"Developers often say to me, 'there's no way they can get at other tables from this API'. I just find that hard to accept."
I frequently say, "A claim is only as good as its veracity." If a person has the wherewithal to make such a claim, then they should have the commensurate data to prove it. They're doing assessments through fuzz testing and input validation. They follow a design practice that builds security into their systems. They hire third-party orgs to do all these things. And there's a paper trail from all of that work being performed.
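As a concrete example of the kind of evidence that can back such a claim, here's a minimal fuzzing sketch. The `lookup_record` handler and its contract (only alphanumeric record IDs are valid) are hypothetical, invented purely for illustration; a real assessment would fuzz the actual API surface with a proper tool.

```python
import random
import string

def lookup_record(record_id: str) -> str:
    # Hypothetical handler: only alphanumeric IDs are valid;
    # anything else must be rejected before reaching a query.
    if not record_id.isalnum():
        raise ValueError("invalid record id")
    return f"record:{record_id}"

def fuzz(handler, iterations: int = 1000) -> list:
    """Throw random printable strings at the handler and collect any
    input that was accepted despite containing suspicious characters."""
    findings = []
    for _ in range(iterations):
        length = random.randint(1, 20)
        candidate = "".join(random.choice(string.printable) for _ in range(length))
        try:
            handler(candidate)
        except ValueError:
            continue  # rejected as expected
        if not candidate.isalnum():
            findings.append(candidate)  # accepted input it should have rejected
    return findings

print(fuzz(lookup_record))  # an empty list means no findings this run
```

A run with findings is a bug report; a run without findings is exactly the kind of paper trail a developer can point to when they say "there's no way".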
One challenge I experience in assessing risk in applications and APIs is that I cannot know what a more expert, determined, or funded bad actor can do with even the most trivial of APIs.
Yes, this is a challenge. I've had the experience of coming across something that I sense is a problem, but I can't quite quantify it or express it up front. It eventually struck me that I was too focused on the probability side of things. An interesting exercise is to compare your annualized loss expectancy (ALE) against the original asset value. Think of it as measuring the risk of your risk assessment. If you have something with a high asset value and a low ALE, that should be an alert that you have a lot riding on a probability assumption. Keep in mind, too, that developers and sysadmins don't always realize what their work protects. If instead you say, "Your code is protecting a $1 million asset," it might get them to rethink a cavalier dismissal based on probability alone.
This is really good feedback. You're right: the asset's value, along with protection "commensurate with risk," should remain front of mind. As a former developer, I often get hung up on the details (the flaws).
@Judyblanks It's possible I wasn't clear in my initial post. I was asking if I was right to be concerned with what I felt was a potential flaw or risk in an application despite not being able to prove that the flaw could be exploited.
Judy, you know what heightened my paranoia? Studying for CISSP.
One day I was just frolicking through the garden, thinking, "Man, I wonder how exposed my employer actually is." A few weeks later, every subsequent page-turn of the CBK was yet another reminder of the unmitigated risks left to assess.
@cclements and that's what it's all about, really. Flaws. But you gotta know if it's software flaws, API flaws, mitigating control flaws, or flaws in governance which determine how much protection your assets require. So who's your POC that can answer these questions? 😄