<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Topic: Persistent Threat in Large Language Models (Tech Talk)</title>
    <link>https://community.isc2.org/t5/Tech-Talk/Persistent-Threat-in-Large-Language-Models/m-p/73079#M4514</link>
    <description>&lt;P&gt;Hi All&lt;/P&gt;&lt;P&gt;Prompt injection has become a prominent area of focus in AI security. Despite extensive discussion of the subject, the actual business risks it poses remain unclear. For instance, what disruptions could follow if an LLM returns incorrect or dangerous information? Can a single bad response propagate through a system and cause significant damage?&lt;/P&gt;&lt;P&gt;To illustrate, consider a real-world application. An AI-powered recruiting system could use Retrieval-Augmented Generation (RAG) to fetch relevant CVs and then ask an LLM to summarize and score them. To mitigate prompt injection, the system might employ a second LLM to validate the output of the first. How can a prompt injection embedded in a CV persist through this pipeline?&lt;/P&gt;&lt;P&gt;&lt;A href="https://www.linkedin.com/pulse/persistent-threat-large-language-models-chenta-lee-mxwge/?trackingId=wKyPxMdrpY%2BQ%2FQ650JsJzw%3D%3D" target="_blank" rel="noopener"&gt;https://www.linkedin.com/pulse/persistent-threat-large-language-models-chenta-lee-mxwge/?trackingId=wKyPxMdrpY%2BQ%2FQ650JsJzw%3D%3D&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Regards&lt;/P&gt;&lt;P&gt;Caute_Cautim&lt;/P&gt;</description>
    <pubDate>Wed, 14 Aug 2024 00:04:25 GMT</pubDate>
    <dc:creator>Caute_cautim</dc:creator>
    <dc:date>2024-08-14T00:04:25Z</dc:date>
    <item>
      <title>Persistent Threat in Large Language Models</title>
      <link>https://community.isc2.org/t5/Tech-Talk/Persistent-Threat-in-Large-Language-Models/m-p/73079#M4514</link>
      <description>&lt;P&gt;Hi All&lt;/P&gt;&lt;P&gt;Prompt injection has become a prominent area of focus in AI security. Despite extensive discussion of the subject, the actual business risks it poses remain unclear. For instance, what disruptions could follow if an LLM returns incorrect or dangerous information? Can a single bad response propagate through a system and cause significant damage?&lt;/P&gt;&lt;P&gt;To illustrate, consider a real-world application. An AI-powered recruiting system could use Retrieval-Augmented Generation (RAG) to fetch relevant CVs and then ask an LLM to summarize and score them. To mitigate prompt injection, the system might employ a second LLM to validate the output of the first. How can a prompt injection embedded in a CV persist through this pipeline?&lt;/P&gt;&lt;P&gt;&lt;A href="https://www.linkedin.com/pulse/persistent-threat-large-language-models-chenta-lee-mxwge/?trackingId=wKyPxMdrpY%2BQ%2FQ650JsJzw%3D%3D" target="_blank" rel="noopener"&gt;https://www.linkedin.com/pulse/persistent-threat-large-language-models-chenta-lee-mxwge/?trackingId=wKyPxMdrpY%2BQ%2FQ650JsJzw%3D%3D&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Regards&lt;/P&gt;&lt;P&gt;Caute_Cautim&lt;/P&gt;</description>
      <pubDate>Wed, 14 Aug 2024 00:04:25 GMT</pubDate>
      <guid>https://community.isc2.org/t5/Tech-Talk/Persistent-Threat-in-Large-Language-Models/m-p/73079#M4514</guid>
      <dc:creator>Caute_cautim</dc:creator>
      <dc:date>2024-08-14T00:04:25Z</dc:date>
    </item>
  </channel>
</rss>

