Cybersecurity industry best practices keep allowing breaches
This is an opinion piece by Prof. Allen Gwinn, "a professor of Practice in Information Technology at the Cox School of Business at SMU Dallas. He has well over four decades of experience with systems, networks, data and other cyber resources."
Even though I disagree with some of his opinions, I still think it's a good piece for reflecting on how we run and think about our security programs. For example, we could communicate better about what our security program goals actually are; most people assume it's all about prevention.
Thoughts from the Community?
@tmekelburg1 Interesting. However, he misses many major issues:
1) You cannot protect yourselves if you cannot detect threats and make them visible to staff.
2) Is there good leadership and governance, with a sound understanding of risk management, asset management and knowledge of what is critical to the organisation, i.e. what would cause a major disruption?
3) Incident response plans: do they have adequate playbooks for the various scenarios, and have they walked through them and learnt lessons whilst doing so?
4) Have they put themselves through a Cybersecurity Command Centre experience with the entire team?
5) Do they have owners for each of their assets, sound evaluations, and the necessary controls to protect them adequately?
6) Are there good BCP and DR plans, and have these been tested regularly and frequently?
7) Are the business strategy and objectives aligned with the security strategy and its related policies?
I am sure others can add many other items to the list.
Regards
Caute_cautim
@tmekelburg1 Agreed, there need to be more incentives higher up the command chain to ensure organisations take measures to protect themselves rather than resting on their laurels whilst ignoring what their respective CISOs are telling them. Perhaps more emphasis on contributory negligence and higher penalties would help.
Meanwhile, some countries tend to go quiet and attempt to fly under the radar, so to speak.
Regards
Caute_cautim
I have always disliked the term "industry best practices". The goal should always be to make practices appropriate and relevant, which necessitates understanding the organisation: its people, processes, technology, culture, strategy and so on.
One of the fundamental problems is that security allows "exceptions" for business use cases that present risk. If we had better detection and prevention capabilities in our network architectures we could control and contain that risk, but often that does not happen and it goes unchecked. Those exceptions create bad security patterns and breed a false sense of security, often without a supporting technical threat model or analysis.
Look at what happens with traditional network security today: users and devices authenticate once, then get free-roaming access to networks and can move laterally. We need to go back to the foundations of computer science, back to mainframe security principles where we control users and their every action. Think Zero Trust; that is the "new" best practice that will prevent the spread of ransomware and stop data exfiltration and breaches from happening.
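To make that concrete, here is a rough Python sketch of the difference between the two models. Everything in it (the Request type, the posture fields, the policy table) is hypothetical and purely for illustration, not any particular product's API:

```python
# A minimal sketch contrasting one-time network authentication with
# per-request Zero Trust authorization. All names here are made up.
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    resource: str            # the specific asset being accessed
    mfa_verified: bool
    device_compliant: bool   # e.g. patched, encrypted, EDR running

# Traditional model: authenticate once, then roam the network freely.
authenticated_users = set()

def legacy_access(req: Request) -> bool:
    authenticated_users.add(req.user)        # one-time login...
    return req.user in authenticated_users   # ...then everything is open

# Zero Trust model: no implicit trust carries over between requests;
# identity, device posture, and the specific resource are re-checked
# on every single access against an explicit policy.
ALLOWED = {("alice", "payroll-db"), ("bob", "build-server")}

def zero_trust_access(req: Request) -> bool:
    return (req.mfa_verified
            and req.device_compliant
            and (req.user, req.resource) in ALLOWED)

req = Request("alice", "payroll-db", mfa_verified=True, device_compliant=True)
print(legacy_access(req), zero_trust_access(req))   # True True

req.resource = "build-server"                       # attempted lateral move
print(legacy_access(req), zero_trust_access(req))   # True False: contained
```

The point is the last line: the legacy model happily allows the lateral move, whilst the per-request check denies it.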
@AppDefects There will be a lot of mainframe specialists who will be very pleased with your approach and the return to basics. I am sure a lot of RACF specialists will be very pleased too.
I agree, we have been pretty soft on people; we need to harden up and get them prepared for reality.
Regards
Caute_cautim
@AppDefects wrote: One of the fundamental problems is that security allows "exceptions" for business use cases that present risk.
Are you saying risk levels shouldn't be a factor in determining the security controls placed on systems or processes? Or the scenario where a security control is put in place but we allow an exception for certain individuals or processes? Just trying to clarify what you mean.