singhmanmeet
Newcomer I

Setting a global common baseline security requirements standard for all AI technologies?

 

Is it possible to set a global common baseline security requirements standard for all AI technologies?

If the answer is yes, how would it happen?

 

The UK Government's Department for Science, Innovation & Technology (DSIT) has an open consultation on cybersecurity risks related to AI and is proactively seeking input from ISC2 members. DSIT is proposing to create a voluntary Code of Conduct that would set baseline security requirements for all AI technologies.

This Code of Conduct would then be submitted to the European Telecommunications Standards Institute (ETSI) to guide the development of a global standard on cybersecurity for AI systems and models.

How would the Code and the technical standard help everyone? Please share your key takeaways and the benefits you see.

1 Reply
Chinatu
Newcomer II

Definitely, and it is the best approach to harmonizing the overwhelming threats and abuses around AI technology. I see every AI technology being tackled and managed from the perspective of the International Criminal Police Organization (Interpol) and the Budapest Convention: mandatory, end-to-end global security enforcement and a baseline, with appropriate legal sanctions, prosecutions, and fines for offences traceable to abuse, misuse, and threats. But the standard has to be driven by leading frameworks such as NIST, ISO, CISA, and the National Cybersecurity Alliance, and most importantly by the privacy regulation frameworks of the various jurisdictions, which would in turn circulate to other member countries as minimum standards and requirements for every AI technology.
Again, embedding an AI security policy as an issue-specific policy added to the control objectives in ISO 27001 could be the best way to enforce the security baseline for AI across the globe. Since organizations are mandated to certify their business processes against ISO 27001, AI security would become one of the control objectives enforced through ISO 27001 certification. That certification could be driven by the governing bodies across sectors in each jurisdiction to ensure every sector and business operates with the ISO 27001 control objectives as the baseline.
AI data privacy requirements could also be incorporated into each jurisdiction's data privacy regulation, such as the GDPR or NDPR, with the necessary enforcement strategies. Enforcing AI security through Interpol's work on cybercrime could be the best deterrent. For example, just as the law has standardized sections and codes of crimes, we could establish codes of AI offences that could be prosecuted, with sanctions, required remediation, or imprisonment for offenders. Law enforcement agents could then rely on those codes to punish offenders and, at the same time, enforce order.
I like the initiative of the Organisation for Economic Co-operation and Development (OECD) in promoting a good ethical code of conduct for AI. Around 43 countries have already signed on to the AI principles and governance driven by the OECD. Extending the OECD's AI ethical code of conduct initiative to all countries should be embraced through unions and continental bodies such as the African Union (AU) and others.
We really cannot rely on any one of the bodies mentioned above; we should deliberately work with all of them on the necessary AI security standards and specifications for enforcement.
The ongoing concern is drafting an end-to-end AI issue-specific policy that incorporates every aspect of AI technology, as well as liaising with Interpol and the relevant legal bodies to draft the codes of law and sanctions for crimes and offences traceable to AI. Harmonizing AI offences across the globe would be required, along with extensive collaboration with regulatory bodies worldwide.
We need to leverage the existing AI frameworks to capture a detailed and concise AI security baseline: ISO/IEC 42001, the Google Secure AI Framework, the NIST AI Risk Management Framework, and the OECD guidelines on AI governance and principles, as well as aligning AI threats with the OWASP Top 10 for Large Language Model (LLM) applications and their countermeasures.
The DSIT initiative is a great move with clear benefits if we get the AI risk analysis and findings right. We may need to liaise with AI stakeholders, especially corporate AI leaders, for a detailed analysis of the risks in each key area to determine the ideal code of conduct for that area. We may also need to liaise with major or leading consumers of AI technology to understand, in practice, the various security issues and concerns, and from that, the ethical conduct needed to counter the risks.
Such codes of conduct would deter the bad guys and at the same time grant everyone with the spot-on sanctions and prosecutions for the Offenders. Driving the ethics centrally would discourage criminals from the jurisdictions with no specific AI rules to run with. I hope the above helps.