Caute_cautim
Community Champion

ISO/IEC 42001:2023 for AI Management Systems published

Hi All

 

ISO/IEC 42001:2023 for AI Management Systems was published yesterday.

 

My suspicion is that it will become as important to the AI world as ISO/IEC 27001 arguably is to information security management systems.

The standard provides a comprehensive framework for establishing, implementing, maintaining, and improving an artificial intelligence management system within organisations. It aims to ensure responsible AI development, deployment, and use, addressing ethical implications, data quality, and risk management. This set of guidelines is designed to integrate AI management with organisational processes, focusing on risk management and offering detailed implementation controls.

Key aspects of the standard include performance measurement, emphasising both quantitative and qualitative outcomes, and the effectiveness of AI systems in achieving their intended results. It mandates conformity to requirements and systematic audits to assess AI systems. The standard also highlights the need for thorough assessment of AI's impact on society and individuals, stressing data quality to meet organisational needs.

Organisations are required to document controls for AI systems and rationalise their decisions, underscoring the role of governance in ensuring performance and conformance. The standard calls for adapting management systems to include AI-specific considerations like ethical use, transparency, and accountability. It also requires continuous performance evaluation and improvement, ensuring AI systems' benefits and safety.

 

ISO/IEC 42001:2023 aligns closely with the EU AI Act. The AI Act classifies AI systems into prohibited and high-risk categories, each with distinct compliance obligations. ISO/IEC 42001:2023's focus on ethical AI management, risk management, data quality, and transparency aligns with these categories, providing a pathway for meeting the AI Act’s requirements.

The AI Act's prohibitions include specific AI systems like biometric categorisation and untargeted scraping for facial recognition. The standard may help guide organisations in identifying and discontinuing such applications. For high-risk AI systems, the AI Act mandates comprehensive risk management, registration, data governance, and transparency, which the ISO/IEC 42001:2023 framework could support. It could assist providers of high-risk AI systems in establishing risk management frameworks and maintaining operational logs, ensuring non-discriminatory, rights-respecting systems.

ISO/IEC 42001:2023 may also aid users of high-risk AI systems in fulfilling obligations such as human oversight and cybersecurity. It could potentially assist in managing foundation models and General Purpose AI (GPAI), as required under the AI Act.

This new standard offers a comprehensive approach to managing AI systems, aiding organisations in developing AI that respects fundamental rights and ethical standards.
 
Regards
 
Caute_Cautim
 
3 Replies
Early_Adopter
Community Champion

Once it’s passing Turing tests, I wonder if we’ll respect its rights as well?
Caute_cautim
Community Champion

@Early_Adopter 

 

Not once it is replicating itself, generating its own biases and discriminatory factoids that everyone believes - without question, of course.

 

Regards

 

Caute_Cautim

Early_Adopter
Community Champion

Now then, if it does that it's just a proper grown-up threat actor… the factoid core from Portal 2, I reckon…

https://m.youtube.com/watch?v=CFBVkXir2qs


Not in this one but it’s still fun.