cancel
Showing results for 
Show  only  | Search instead for 
Did you mean: 
cancel
Showing results for 
Show  only  | Search instead for 
Did you mean: 
Caute_cautim
Community Champion

How AI can be hacked with prompt injection: NIST report

Hi All

 

The National Institute of Standards and Technology (NIST) closely observes the AI lifecycle, and for good reason. As AI proliferates, so does the discovery and exploitation of AI cybersecurity vulnerabilities. Prompt injection is one such vulnerability that specifically attacks generative AI.

In Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations, NIST defines various adversarial machine learning (AML) tactics and cyberattacks, like prompt injection, and advises users on how to mitigate and manage them. AML tactics extract information about how machine learning (ML) systems behave to discover how they can be manipulated. That information is used to attack AI and its large language models (LLMs) to circumvent security, bypass safeguards and open paths to exploit.

 

https://securityintelligence.com/articles/ai-prompt-injection-nist-report/?utm_medium=OSocial&utm_so...

 

Regards

 

Caute_Cautim

0 Replies