Jim Dempsey wrote an excellent blog post, "Addressing the Security Risks of AI": https://www.lawfareblog.com/addressing-security-risks-ai
In it, he writes, "Our report also recommends more collaboration between cybersecurity practitioners, machine learning engineers, and adversarial machine learning researchers. Assessing AI vulnerabilities requires technical expertise that is distinct from the skill set of cybersecurity practitioners, and organizations should be cautioned against repurposing existing security teams without additional training and resources. We also note that AI security researchers and practitioners should consult with those addressing AI bias. AI fairness researchers have extensively studied how poor data, design choices, and risk decisions can produce biased outcomes. Since AI vulnerabilities may be more analogous to algorithmic bias than they are to traditional software vulnerabilities, it is important to cultivate greater engagement between the two communities."
What are your thoughts?
I think more fundamental elements and governance are required, such as frameworks like this one: https://www.ibm.com/topics/ai-ethics
Beyond that, all cybersecurity practitioners need to understand the fundamentals, such as the AI ladder: what good data means, and how best to collect that data to create appropriate models.
The privacy-by-design aspects are particularly important.
Collaboration with cybersecurity and privacy teams is essential at the initial stages and throughout the development process.
At the moment, we appear to be in a "new technology, let's try it" mode, regardless of the outcomes, implications, or impact. You need to know what outcome is expected before you start: rather like the old adage of "garbage in, garbage out".
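To make that adage concrete, here is a minimal sketch (in Python) of the kind of data-quality gate that could sit in front of model training. The field names, threshold, and validate_records function are purely illustrative assumptions on my part, not part of any specific framework:

    # A minimal, hypothetical data-quality gate placed before model training.
    # Field names and the 5% missing-value threshold are illustrative only.
    def validate_records(records, required_fields, max_missing_ratio=0.05):
        """Reject a training set whose missing-value ratio is too high."""
        if not records:
            raise ValueError("empty training set: nothing to learn from")
        missing = 0
        total = len(records) * len(required_fields)
        for row in records:
            for field in required_fields:
                if row.get(field) in (None, ""):
                    missing += 1
        ratio = missing / total
        if ratio > max_missing_ratio:
            raise ValueError(
                f"{ratio:.1%} of required values are missing; "
                "fix collection before training (garbage in, garbage out)"
            )
        return records

    # Hypothetical usage: gate the data before it ever reaches the model.
    training_data = [
        {"source_ip": "10.0.0.1", "label": "benign"},
        {"source_ip": "10.0.0.2", "label": "malicious"},
    ]
    clean = validate_records(training_data, required_fields=["source_ip", "label"])

The point is simply that "collect good data" can be enforced as an explicit, testable step, rather than assumed.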
Regards
Caute_Cautim