According to a PwC report, C-suite executives are the greatest risk. What do you think?
Do you concur or do you have another perspective?
This quote from the article says it all:
"The cybersecurity team also often takes the blame for a breach when in fact the root cause is usually traced back to a user that disregarded one cybersecurity policy or another. Sadly, that end user often turns out to be a C-level executive that should have known better."
In reality, C-level executives may not always know better. It is often the layer of management below them that does not fully comprehend the gravity of allowing vulnerabilities into production systems and granting exceptions to policies. And it gets worse: hackers know the odds are good that a new release has been approved to go live with known vulnerabilities. Yes, people, that happens. There is a lot of application technical debt out there just waiting to be discovered.
I once had a Sr. VP click on the attachment in an email entitled "ILOVEYOU". When asked if they knew the sender, the reply was "No, I was just curious."
Totally agree that middle management sometimes puts corporations at risk more than the C-suite does. The C-suite becomes involved after a breach and typically opens the wallet for security spending.
It seems much of the article is trying to say the C-suite is now taking security seriously. But I think the larger point is still being missed. Security is a cultural thing, and it can take years to change corporate culture. If one half of the problem is that the C-suite hasn't prioritized security, the other half of the equation is that the security folks haven't done a good job either of getting that message across or of moving into senior management roles.
That said, I think senior management generally is getting the idea, or at least willing to listen now. There's still work to do, but where I see the biggest risk or gap is with young employees. They come into the workplace with certain bad habits and expectations. Maybe 10 or 20 years ago there was a need to integrate information security into business school; today it has to be done in elementary school.
Perhaps this is where AI fits in: warning a C-suite member that if you choose this decision, it will have the following consequences, or that if you take this particular route, you open up a whole heap of pain for yourself and the organisation, and potentially technical debt that will keep you spending X dollars for X years.
They need to understand the consequences of their decisions. Hindsight is not good enough these days; we need to predict the likely outcome from the lessons learnt every day by many organisations and practitioners. We need to start gathering this data and using it to protect and educate our C-suite, so they understand the risks associated with their decisions before they are allowed to throw the big switch and make it happen: scenario-based cybersecurity played out before they are allowed to make that decision.
Rather like tanker captains being put in a miniature ship within a canal and run through real live scenarios, learning the consequences of running aground, for instance. I know it does not quite match, but you get my drift.
But isn't this the dilemma/challenge? People want an app to think for them. We live in an age of TL;DR: "It's your fault that I am too lazy to read." It's this sense of intellectual entitlement that has led to a willing suspension of common sense. We have conditioned people to follow the crowd or download the app rather than to think. It's not that people are incapable of understanding risk; they just choose not to, and I see that problem prevalent throughout the workforce, not just at the C-suite level.
And think about how attackers shift their attacks to avoid the latest security thing.
If I am a burglar I try to break into your house. The battle usually goes like this:
Bad guy (BG): Tries the door handle. Finds it unlocked, goes in and steals something.
Home Owner (HO): Installs locking door handles.
BG: Tries handle and finds it is locked. Breaks window, goes in and steals something.
HO: Repairs glass. Installs surveillance system with glass-break sensors.
BG: Gets a ladder and breaks in through a second-story window.
HO: Puts sensors on all windows and doors. Gets a dog.
BG: Stakes out the house and learns the security code. Brings a pack of meat for the dog. Goes in and steals something.
HO: Puts up a sign that their neighbor's house doesn't have locks.
If we shift the responsibility to an artificial intelligence (AI) method, a motivated attacker will just change their methodology and attack the new thing.
I am addressing the psychology behind the attacks. In our latest phishing test, I asked all of the clickers to tell me why they clicked, and the responses break down into roughly five psychological areas of attack:
1) Trust - Fooled by a logo or something else they have been taught to trust.
2) Inattentive - Just weren't paying attention
3) Panic - Account lock out, fake charges to their account, a login from Russia, etc.
4) Irresponsible - Mixed personal use with business use
5) Wall of shame - They lied to me about the reason they clicked ("The dog jumped on my arm just as I was about to delete it," "I was about to delete it and I sneezed," etc.). They hide behind the wall of shame because they don't want to admit that they were fooled.
I can create awareness campaigns around those to reduce our click rate.
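Tallying click reasons by category like this is easy to automate once responses are labeled. As a minimal sketch in Python (the category labels mirror the five areas above, but the respondent names and data are invented, not from any real phishing test):

```python
from collections import Counter

# The five psychological categories from the phishing-test breakdown above.
CATEGORIES = ["trust", "inattentive", "panic", "irresponsible", "wall_of_shame"]

# Hypothetical labeled responses: (clicker, category assigned from their reply).
responses = [
    ("alice", "trust"),         # fooled by a familiar logo
    ("bob", "inattentive"),     # just wasn't paying attention
    ("carol", "panic"),         # fake account-lockout notice
    ("dave", "trust"),
    ("erin", "wall_of_shame"),  # "the dog jumped on my arm"
]

def click_breakdown(responses):
    """Count clickers per psychological category, including zero counts."""
    counts = Counter(category for _, category in responses)
    return {cat: counts.get(cat, 0) for cat in CATEGORIES}

print(click_breakdown(responses))
# {'trust': 2, 'inattentive': 1, 'panic': 1, 'irresponsible': 0, 'wall_of_shame': 1}
```

Tracking these counts across successive phishing tests shows whether an awareness campaign aimed at one category (say, panic-driven clicks) is actually moving the numbers.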
I think the C-suite needs to understand the psychology of the attackers and how they prey on our users' psychological weaknesses, and allow us CISOs and information security folks to use that knowledge to help protect our environments.