<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Topic: Examples of usage guidelines for AI assistant tools? in Governance, Risk, Compliance</title>
    <link>https://community.isc2.org/t5/Governance-Risk-Compliance/Examples-of-usage-guidelines-for-AI-assistant-tools/m-p/64131#M986</link>
    <description>&lt;P&gt;With all the various "AI assistant" tools becoming available, does anyone have any good examples of usage policies companies are using?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Zoom, for instance, is now strongly promoting their "AI Companion" tool. There are MANY others. These tools can be incredibly useful when added to a Zoom call - providing transcripts, capturing action items, summarizing the call, etc.&amp;nbsp; And for a &lt;EM&gt;public&lt;/EM&gt; webinar or call where the recording will be shared publicly later, I don't personally have much of an issue with the tools.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;But my concern is when these tools may be used on &lt;EM&gt;internal&lt;/EM&gt; calls where internal / confidential / private information may be shared. The information (transcripts, recordings, etc.) is then stored on other servers - and, much worse, could potentially be used as "training data" for the AI large language model (LLM). If so, there &lt;EM&gt;could&lt;/EM&gt; be a case where some of that training data comes out in a response to someone. There's a risk there.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Beyond Zoom calls/webinars, people are definitely interested in using tools like ChatGPT, Bard, Claude, Bing Chat, etc. to rapidly summarize documents, create information, etc. Again, if it is summarizing a &lt;EM&gt;public&lt;/EM&gt; document, I don't see an issue. But if it is summarizing an &lt;EM&gt;internal&lt;/EM&gt; document, I see definite risks.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;What are people doing in terms of usage policies for these kinds of tools?&amp;nbsp;&lt;/STRONG&gt; Blocking all tools is obviously one approach, but there &lt;EM&gt;are&lt;/EM&gt; definite benefits and efficiencies that can come from these tools.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;(And yes, one approach I'm seeing some people take is to use one of the LLMs that can be installed on-premises so that it can be used with internal information - and all the data stays within the corporate network.)&lt;/P&gt;</description>
    <pubDate>Thu, 02 Nov 2023 16:09:03 GMT</pubDate>
    <dc:creator>danyork</dc:creator>
    <dc:date>2023-11-02T16:09:03Z</dc:date>
    <item>
      <title>Examples of usage guidelines for AI assistant tools?</title>
      <link>https://community.isc2.org/t5/Governance-Risk-Compliance/Examples-of-usage-guidelines-for-AI-assistant-tools/m-p/64131#M986</link>
      <description>&lt;P&gt;With all the various "AI assistant" tools becoming available, does anyone have any good examples of usage policies companies are using?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Zoom, for instance, is now strongly promoting their "AI Companion" tool. There are MANY others. These tools can be incredibly useful when added to a Zoom call - providing transcripts, capturing action items, summarizing the call, etc.&amp;nbsp; And for a &lt;EM&gt;public&lt;/EM&gt; webinar or call where the recording will be shared publicly later, I don't personally have much of an issue with the tools.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;But my concern is when these tools may be used on &lt;EM&gt;internal&lt;/EM&gt; calls where internal / confidential / private information may be shared. The information (transcripts, recordings, etc.) is then stored on other servers - and, much worse, could potentially be used as "training data" for the AI large language model (LLM). If so, there &lt;EM&gt;could&lt;/EM&gt; be a case where some of that training data comes out in a response to someone. There's a risk there.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Beyond Zoom calls/webinars, people are definitely interested in using tools like ChatGPT, Bard, Claude, Bing Chat, etc. to rapidly summarize documents, create information, etc. Again, if it is summarizing a &lt;EM&gt;public&lt;/EM&gt; document, I don't see an issue. But if it is summarizing an &lt;EM&gt;internal&lt;/EM&gt; document, I see definite risks.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;What are people doing in terms of usage policies for these kinds of tools?&amp;nbsp;&lt;/STRONG&gt; Blocking all tools is obviously one approach, but there &lt;EM&gt;are&lt;/EM&gt; definite benefits and efficiencies that can come from these tools.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;(And yes, one approach I'm seeing some people take is to use one of the LLMs that can be installed on-premises so that it can be used with internal information - and all the data stays within the corporate network.)&lt;/P&gt;</description>
      <pubDate>Thu, 02 Nov 2023 16:09:03 GMT</pubDate>
      <guid>https://community.isc2.org/t5/Governance-Risk-Compliance/Examples-of-usage-guidelines-for-AI-assistant-tools/m-p/64131#M986</guid>
      <dc:creator>danyork</dc:creator>
      <dc:date>2023-11-02T16:09:03Z</dc:date>
    </item>
  </channel>
</rss>

