<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Topic: Evolving red teaming for AI environments in Tech Talk</title>
    <link>https://community.isc2.org/t5/Tech-Talk/Evolving-red-teaming-for-AI-environments/m-p/70269#M4421</link>
    <description>&lt;P&gt;Hi All&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;As AI becomes more ingrained in businesses and daily life, securing it becomes ever more critical. In fact, according to the &lt;A href="https://www.ibm.com/thought-leadership/institute-business-value/en-us/report/ceo-generative-ai/cybersecurity?WHB=1&amp;amp;Channel=vel-solp&amp;amp;field_insight_category_target_id=466&amp;amp;wtime=%7Bseek_to_second_number%7D&amp;amp;page=%2C%2C1%2C0#!/Resources" target="_blank"&gt;IBM Institute for Business Value&lt;/A&gt;, 96% of executives say adopting &lt;A href="https://www.ibm.com/topics/generative-ai" target="_blank"&gt;generative AI&lt;/A&gt; (GenAI) makes a security breach likely in their organization within the next three years. Whether it’s a model performing unintended actions, &lt;A href="https://securityintelligence.com/posts/unmasking-hypnotized-ai-hidden-risks-large-language-models/" target="_blank"&gt;generating misleading or harmful responses&lt;/A&gt;&amp;nbsp;or revealing sensitive information, in the AI era security can no longer be an afterthought to innovation.&lt;/P&gt;&lt;P&gt;AI red teaming is emerging as one of the most effective first steps businesses can take toward safe and secure systems today. But security teams can’t approach testing AI the same way they test conventional software or applications: you need to understand AI to test it. Bringing in data science expertise is imperative — without that skill, teams risk falsely reporting AI models and systems as safe and secure, widening the window of opportunity for attackers.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;A href="https://securityintelligence.com/x-force/evolving-red-teaming-ai-environments/" target="_blank"&gt;https://securityintelligence.com/x-force/evolving-red-teaming-ai-environments/&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Regards&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Caute_Cautim&lt;/P&gt;</description>
    <pubDate>Sun, 12 May 2024 21:57:32 GMT</pubDate>
    <dc:creator>Caute_cautim</dc:creator>
    <dc:date>2024-05-12T21:57:32Z</dc:date>
    <item>
      <title>Evolving red teaming for AI environments</title>
      <link>https://community.isc2.org/t5/Tech-Talk/Evolving-red-teaming-for-AI-environments/m-p/70269#M4421</link>
      <description>&lt;P&gt;Hi All&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;As AI becomes more ingrained in businesses and daily life, securing it becomes ever more critical. In fact, according to the &lt;A href="https://www.ibm.com/thought-leadership/institute-business-value/en-us/report/ceo-generative-ai/cybersecurity?WHB=1&amp;amp;Channel=vel-solp&amp;amp;field_insight_category_target_id=466&amp;amp;wtime=%7Bseek_to_second_number%7D&amp;amp;page=%2C%2C1%2C0#!/Resources" target="_blank"&gt;IBM Institute for Business Value&lt;/A&gt;, 96% of executives say adopting &lt;A href="https://www.ibm.com/topics/generative-ai" target="_blank"&gt;generative AI&lt;/A&gt; (GenAI) makes a security breach likely in their organization within the next three years. Whether it’s a model performing unintended actions, &lt;A href="https://securityintelligence.com/posts/unmasking-hypnotized-ai-hidden-risks-large-language-models/" target="_blank"&gt;generating misleading or harmful responses&lt;/A&gt;&amp;nbsp;or revealing sensitive information, in the AI era security can no longer be an afterthought to innovation.&lt;/P&gt;&lt;P&gt;AI red teaming is emerging as one of the most effective first steps businesses can take toward safe and secure systems today. But security teams can’t approach testing AI the same way they test conventional software or applications: you need to understand AI to test it. Bringing in data science expertise is imperative — without that skill, teams risk falsely reporting AI models and systems as safe and secure, widening the window of opportunity for attackers.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;A href="https://securityintelligence.com/x-force/evolving-red-teaming-ai-environments/" target="_blank"&gt;https://securityintelligence.com/x-force/evolving-red-teaming-ai-environments/&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Regards&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Caute_Cautim&lt;/P&gt;</description>
      <pubDate>Sun, 12 May 2024 21:57:32 GMT</pubDate>
      <guid>https://community.isc2.org/t5/Tech-Talk/Evolving-red-teaming-for-AI-environments/m-p/70269#M4421</guid>
      <dc:creator>Caute_cautim</dc:creator>
      <dc:date>2024-05-12T21:57:32Z</dc:date>
    </item>
  </channel>
</rss>