<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Security and Privacy Challenges of Large Language Models: A Survey (Threats)</title>
    <link>https://community.isc2.org/t5/Threats/Security-and-Privacy-Challenges-of-Large-Language-Models-A/m-p/68686#M1146</link>
    <description>An interesting paper on the security and privacy challenges of Large Language Models.</description>
    <pubDate>Mon, 25 Mar 2024 22:25:41 GMT</pubDate>
    <dc:creator>Caute_cautim</dc:creator>
    <dc:date>2024-03-25T22:25:41Z</dc:date>
    <item>
      <title>Security and Privacy Challenges of Large Language Models: A Survey</title>
      <link>https://community.isc2.org/t5/Threats/Security-and-Privacy-Challenges-of-Large-Language-Models-A/m-p/68686#M1146</link>
      <description>&lt;P&gt;Hi All&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;An interesting paper on the security and privacy challenges of Large Language Models:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Large Language Models (LLMs) have demonstrated extraordinary capabilities and contributed to multiple fields, such as text generation and summarization, language translation, and question answering. Nowadays, LLMs are becoming very popular tools in computerized language processing tasks, with the capability to analyze complicated linguistic patterns and provide relevant and appropriate responses depending on the context. While offering significant advantages, these models are also vulnerable to security and privacy attacks, such as jailbreaking attacks, data poisoning attacks, and Personally Identifiable Information (PII) leakage attacks. This survey provides a thorough review of the security and privacy challenges of LLMs for both training data and users, along with the application-based risks in various domains, such as transportation, education, and healthcare. We assess the extent of LLM vulnerabilities, investigate emerging security and privacy attacks on LLMs, and review the potential defense mechanisms. Additionally, the survey outlines existing research gaps in this domain and highlights future research directions.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;A href="https://arxiv.org/pdf/2402.00888.pdf" target="_blank" rel="noopener"&gt;https://arxiv.org/pdf/2402.00888.pdf&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Regards&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Caute_Cautim&lt;/P&gt;</description>
      <pubDate>Mon, 25 Mar 2024 22:25:41 GMT</pubDate>
      <guid>https://community.isc2.org/t5/Threats/Security-and-Privacy-Challenges-of-Large-Language-Models-A/m-p/68686#M1146</guid>
      <dc:creator>Caute_cautim</dc:creator>
      <dc:date>2024-03-25T22:25:41Z</dc:date>
    </item>
  </channel>
</rss>