<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic How AI can be hacked with prompt injection: NIST report in Tech Talk</title>
    <link>https://community.isc2.org/t5/Tech-Talk/How-AI-can-be-hacked-with-prompt-injection-NIST-report/m-p/70026#M4416</link>
    <description>&lt;P&gt;Hi All&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The National Institute of Standards and Technology (NIST) closely observes the AI lifecycle, and for good reason. As AI proliferates, so does the discovery and exploitation of AI cybersecurity vulnerabilities. Prompt injection is one such vulnerability that specifically attacks &lt;A href="https://www.ibm.com/topics/generative-ai" target="_blank" rel="noopener nofollow"&gt;generative AI&lt;/A&gt;.&lt;/P&gt;&lt;P&gt;In &lt;A href="https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-2e2023.pdf" target="_blank" rel="noopener nofollow"&gt;Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations&lt;/A&gt;, NIST defines various adversarial machine learning (AML) &lt;A href="https://securityintelligence.com/posts/mapping-attacks-generative-ai-business-impact/" target="_blank"&gt;tactics and cyberattacks&lt;/A&gt;, like prompt injection, and advises users on how to mitigate and manage them. AML tactics extract information about how &lt;A href="https://www.ibm.com/topics/machine-learning?_ga=2.56851370.1630338845.1714967818-1058991947.1713851955&amp;amp;_gl=1*1vhufxo*_ga*MTA1ODk5MTk0Ny4xNzEzODUxOTU1*_ga_FYECCCS21D*MTcxNDk2NzgyMC41LjAuMTcxNDk2NzgyMC4wLjAuMA.." target="_blank" rel="noopener nofollow"&gt;machine learning (ML)&lt;/A&gt; systems behave to discover how they can be manipulated. That information is used to attack &lt;A href="https://www.ibm.com/topics/artificial-intelligence" target="_blank" rel="noopener nofollow"&gt;AI&lt;/A&gt; and its &lt;A href="https://www.ibm.com/topics/large-language-models" target="_blank" rel="noopener nofollow"&gt;large language models (LLMs)&lt;/A&gt; to circumvent security, bypass safeguards and open paths to exploit.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;A href="https://securityintelligence.com/articles/ai-prompt-injection-nist-report/?utm_medium=OSocial&amp;amp;utm_source=Linkedin&amp;amp;utm_content=RSRWW&amp;amp;utm_id=IBMSecurityLIPostInjectionAttacks20240502&amp;amp;sf188168829=1" target="_blank"&gt;https://securityintelligence.com/articles/ai-prompt-injection-nist-report/?utm_medium=OSocial&amp;amp;utm_source=Linkedin&amp;amp;utm_content=RSRWW&amp;amp;utm_id=IBMSecurityLIPostInjectionAttacks20240502&amp;amp;sf188168829=1&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Regards&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Caute_Cautim&lt;/P&gt;</description>
    <pubDate>Mon, 06 May 2024 03:58:37 GMT</pubDate>
    <dc:creator>Caute_cautim</dc:creator>
    <dc:date>2024-05-06T03:58:37Z</dc:date>
    <item>
      <title>How AI can be hacked with prompt injection: NIST report</title>
      <link>https://community.isc2.org/t5/Tech-Talk/How-AI-can-be-hacked-with-prompt-injection-NIST-report/m-p/70026#M4416</link>
      <description>&lt;P&gt;Hi All&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The National Institute of Standards and Technology (NIST) closely observes the AI lifecycle, and for good reason. As AI proliferates, so does the discovery and exploitation of AI cybersecurity vulnerabilities. Prompt injection is one such vulnerability that specifically attacks &lt;A href="https://www.ibm.com/topics/generative-ai" target="_blank" rel="noopener nofollow"&gt;generative AI&lt;/A&gt;.&lt;/P&gt;&lt;P&gt;In &lt;A href="https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-2e2023.pdf" target="_blank" rel="noopener nofollow"&gt;Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations&lt;/A&gt;, NIST defines various adversarial machine learning (AML) &lt;A href="https://securityintelligence.com/posts/mapping-attacks-generative-ai-business-impact/" target="_blank"&gt;tactics and cyberattacks&lt;/A&gt;, like prompt injection, and advises users on how to mitigate and manage them. AML tactics extract information about how &lt;A href="https://www.ibm.com/topics/machine-learning?_ga=2.56851370.1630338845.1714967818-1058991947.1713851955&amp;amp;_gl=1*1vhufxo*_ga*MTA1ODk5MTk0Ny4xNzEzODUxOTU1*_ga_FYECCCS21D*MTcxNDk2NzgyMC41LjAuMTcxNDk2NzgyMC4wLjAuMA.." target="_blank" rel="noopener nofollow"&gt;machine learning (ML)&lt;/A&gt; systems behave to discover how they can be manipulated. That information is used to attack &lt;A href="https://www.ibm.com/topics/artificial-intelligence" target="_blank" rel="noopener nofollow"&gt;AI&lt;/A&gt; and its &lt;A href="https://www.ibm.com/topics/large-language-models" target="_blank" rel="noopener nofollow"&gt;large language models (LLMs)&lt;/A&gt; to circumvent security, bypass safeguards and open paths to exploit.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;A href="https://securityintelligence.com/articles/ai-prompt-injection-nist-report/?utm_medium=OSocial&amp;amp;utm_source=Linkedin&amp;amp;utm_content=RSRWW&amp;amp;utm_id=IBMSecurityLIPostInjectionAttacks20240502&amp;amp;sf188168829=1" target="_blank"&gt;https://securityintelligence.com/articles/ai-prompt-injection-nist-report/?utm_medium=OSocial&amp;amp;utm_source=Linkedin&amp;amp;utm_content=RSRWW&amp;amp;utm_id=IBMSecurityLIPostInjectionAttacks20240502&amp;amp;sf188168829=1&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Regards&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Caute_Cautim&lt;/P&gt;</description>
      <pubDate>Mon, 06 May 2024 03:58:37 GMT</pubDate>
      <guid>https://community.isc2.org/t5/Tech-Talk/How-AI-can-be-hacked-with-prompt-injection-NIST-report/m-p/70026#M4416</guid>
      <dc:creator>Caute_cautim</dc:creator>
      <dc:date>2024-05-06T03:58:37Z</dc:date>
    </item>
  </channel>
</rss>

