The software-related roles out there are probably described as assurance roles rather than specifically software security. With companies using SAST, DAST and RASP, there will be some that think these are a silver bullet and ignore the requirements, architecture and design aspects altogether. Certain teams that believe Agile means no design and no documentation play on this, so they can crank out quick and dirty software whilst describing the defects, including a lack of adequate security, as technical debt. The analogy is then interpreted as something that can be repaid long term or rolled over, rather than something that could result in a breach at any time. The appropriate lens is risk, not debt.
What I have found is that you can teach software security engineering principles to a few developers, but you can't always be successful in teaching them to networking engineers, for various reasons.
The big problem with application security today is the move to Agile. Few Fortune 100 companies do it; they are still in love with Waterfall. Even CMU SEI endorses it here. Another article, which contrasts that approach with Agile, is "Making Security in a Software Factory".
Application security is at a crossroads:
@AppDefects, you are correct. Based on my experience, for real progress to be made, people need to think outside of the frameworks and look at the methods and the problems they help solve. As with any basic engineering, we need to elicit the problem before looking at the various solutions to it. As an added bonus, this approach not only ensures we don't get confined to the methods of any one particular methodology, but, when done well, it also enables effective measurement of the efficacy of many of those methods.
For more than two decades I have watched debates rage between "Waterfall" and "Agile", "RUP vs RAD" or "Agile vs XP", and it follows a similar pattern to developers arguing over whether C++ or .Net is the best language. Of course, when performed effectively, each set of methods is best suited to different situations.
Unfortunately, just like all methodologies, they are most likely to fail at the implementation level. It's the old saying: "if all you have is a hammer, everything looks like a nail". People tend to advocate for the methods, tools and processes they are most familiar with, and quite often apply them to the wrong domains. The software tooling arena is notorious for this.
And then there is often a divide between the process expertise and the technologies used to implement the process. When either team dominates, the outcomes are compromised. One large organisation I remember actually had the process guys and the tooling guys located in separate cities. In that case, the process guys dominated, and the tooling guys were forced to effectively butcher the capabilities of the tools so they would fit into the Visio diagrams provided. It works the opposite way as well (more often so), with sporadic tooling implementations pushing out features with no well-defined process or alignment.
As an example, I understand Winston Royce's original paper formalising the Waterfall methodology actually stated that "teams should go through this process multiple times", or words to that effect. The idea of it being iterative was already there back in the 1970s. Unfortunately, this is not how it was taught, and definitely not how it was applied. In fact, I would say that whilst the Waterfall methodology is based on the core premise, or hope, that requirements will not change, every substantially large Waterfall implementation ends up being iterative as the Change Requests come through. The question is how that is handled, and whether you have the maturity of processes, supporting systems and personnel capability to manage whatever approach you have.
On the other hand, the Agile camp has focussed more on speed of delivery and feature delivery, on the premise that if problems are found they can be addressed rapidly. This can be very effective if you're developing a non-safety-critical webapp, but if you're going to be waiting six months for UL to test and approve your product for sale, as an example, it's a disastrous mentality to have. Whilst there have been attempts to do Agile at scale, they have their own issues, not the least of which being the risk of overloading the architects and, in its purest form, multifunctional teams running roughshod over centralised business systems (databases and security).
Over the decades, I have always worked to apply the best method to the right problem space. This can involve some experimentation, but it does work well. I have used well-known and lesser-known methods regardless of their branding. Heck, one of the most powerful and effective methods I have applied originated in the Prussian military doctrine of the mid-1800s (Helmuth von Moltke the Elder). This inspired an extremely scalable solution which combined centralised control with decentralised enablement.
Some of the SCM principles I have taught, and had my teams teach, over the years actually cite the configuration management of Carthaginian ships in the Third Punic War. The concepts are the same, and having physical examples can be a powerful means of communication. Obviously, with the complexities of modern systems, the nuances differ, but the core concepts I have found to be quite stable.
The balance can be achieved when we don't box ourselves into this tribe or that tribe.