<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Generative tools: What is your policy going to be? (in Governance, Risk, Compliance)</title>
    <link>https://community.isc2.org/t5/Governance-Risk-Compliance/Generative-tools-What-is-your-policy-going-to-be/m-p/59983#M889</link>
    <description>&lt;P&gt;Hi All&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;A very good question: how is your organisation going to cope with AI and generative tools?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;What policy will your respective organisations apply?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;What risks will you encounter?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Will you encounter data leakage of sensitive information?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;A href="https://www.lexology.com/library/detail.aspx?g=8b138f0b-e96b-437e-9351-715d21a56973" target="_blank" rel="noopener"&gt;https://www.lexology.com/library/detail.aspx?g=8b138f0b-e96b-437e-9351-715d21a56973&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Or do you think Asimov got it right?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;First Law&lt;/STRONG&gt;&lt;BR /&gt;A robot [AI System] may not injure a human being or, through inaction, allow a human being to come to harm.&lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt;Second Law&lt;/STRONG&gt;&lt;BR /&gt;A robot [AI System] must obey the orders given it by human beings except where such orders would conflict with the First Law.&lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt;Third Law&lt;/STRONG&gt;&lt;BR /&gt;A robot [AI System] must protect its own existence as long as such protection does not conflict with the First or Second Law.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Regards&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Caute_Cautim&lt;/P&gt;</description>
    <pubDate>Mon, 09 Oct 2023 10:35:19 GMT</pubDate>
    <dc:creator>Caute_cautim</dc:creator>
    <dc:date>2023-10-09T10:35:19Z</dc:date>
    <item>
      <title>Generative tools: What is your policy going to be?</title>
      <link>https://community.isc2.org/t5/Governance-Risk-Compliance/Generative-tools-What-is-your-policy-going-to-be/m-p/59983#M889</link>
      <description>&lt;P&gt;Hi All&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;A very good question: how is your organisation going to cope with AI and generative tools?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;What policy will your respective organisations apply?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;What risks will you encounter?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Will you encounter data leakage of sensitive information?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;A href="https://www.lexology.com/library/detail.aspx?g=8b138f0b-e96b-437e-9351-715d21a56973" target="_blank" rel="noopener"&gt;https://www.lexology.com/library/detail.aspx?g=8b138f0b-e96b-437e-9351-715d21a56973&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Or do you think Asimov got it right?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;First Law&lt;/STRONG&gt;&lt;BR /&gt;A robot [AI System] may not injure a human being or, through inaction, allow a human being to come to harm.&lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt;Second Law&lt;/STRONG&gt;&lt;BR /&gt;A robot [AI System] must obey the orders given it by human beings except where such orders would conflict with the First Law.&lt;BR /&gt;&lt;BR /&gt;&lt;STRONG&gt;Third Law&lt;/STRONG&gt;&lt;BR /&gt;A robot [AI System] must protect its own existence as long as such protection does not conflict with the First or Second Law.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Regards&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Caute_Cautim&lt;/P&gt;</description>
      <pubDate>Mon, 09 Oct 2023 10:35:19 GMT</pubDate>
      <guid>https://community.isc2.org/t5/Governance-Risk-Compliance/Generative-tools-What-is-your-policy-going-to-be/m-p/59983#M889</guid>
      <dc:creator>Caute_cautim</dc:creator>
      <dc:date>2023-10-09T10:35:19Z</dc:date>
    </item>
    <item>
      <title>Re: Generative tools: What is your policy going to be?</title>
      <link>https://community.isc2.org/t5/Governance-Risk-Compliance/Generative-tools-What-is-your-policy-going-to-be/m-p/59992#M891</link>
      <description>&lt;P&gt;ChatGPT will develop a perfectly reasonable-sounding policy for you. Just ask it.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;More seriously, our current stance is to embargo its use until the hype has worn down a bit, the experts have replaced their gut-level reactions with fact-based advice, and we all know better what questions to ask.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;In a few months, I anticipate that any use of such tools will end up going through our standard "application" vetting process, with a particular emphasis on data protection and legal liability (e.g., who pays when its bad advice kills someone?).&lt;/P&gt;</description>
      <pubDate>Wed, 14 Jun 2023 03:44:18 GMT</pubDate>
      <guid>https://community.isc2.org/t5/Governance-Risk-Compliance/Generative-tools-What-is-your-policy-going-to-be/m-p/59992#M891</guid>
      <dc:creator>denbesten</dc:creator>
      <dc:date>2023-06-14T03:44:18Z</dc:date>
    </item>
  </channel>
</rss>