<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Risks of Generative AI - HBR in Governance, Risk, Compliance</title>
    <link>https://community.isc2.org/t5/Governance-Risk-Compliance/Risks-of-Generative-AI-HBR/m-p/62166#M940</link>
    <description>&lt;P&gt;I will never disagree that generative AI carries a huge potential for misuse by people with malicious intentions, but at the same time we have to appreciate how much generative AI is doing for us lately: helping kids with schoolwork and research, assisting content creators with high-quality materials, and making life easier for millions through the high degree of automation it offers. Something THAT beneficial will surely be abused in the wrong hands, hence the immediate need for rules and regulations regarding AI-generated content. I myself use a &lt;A href="https://wavel.ai/studio/ai-voice-cloner/" target="_blank" rel="noopener"&gt;voice cloning AI&lt;/A&gt; to create professional voiceovers for my video content using my own voice as a reference, which saves me a great deal of expense on mic setups, editing software, and dubbing artists' fees, since the application itself lets me dub my voice in multiple languages. Keeping in mind the huge benefits we are reaping from AI, regulations must be introduced ASAP so that people can use AI without fear.&lt;/P&gt;</description>
    <pubDate>Tue, 22 Aug 2023 10:49:54 GMT</pubDate>
    <dc:creator>annianni</dc:creator>
    <dc:date>2023-08-22T10:49:54Z</dc:date>
    <item>
      <title>Risks of Generative AI - HBR</title>
      <link>https://community.isc2.org/t5/Governance-Risk-Compliance/Risks-of-Generative-AI-HBR/m-p/61723#M931</link>
      <description>&lt;P&gt;Hi All&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;A very interesting piece, something to think about at the corporate board level:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;A href="https://hbr.org/2023/08/generative-ai-nxiety" target="_blank"&gt;https://hbr.org/2023/08/generative-ai-nxiety&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Ask yourselves:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;OL&gt;&lt;LI&gt;What ethical, reputational, regulatory, and legal risks does generative AI share with non-generative AI?&lt;/LI&gt;&lt;LI&gt;What ethical, reputational, regulatory, and legal risks are particular to, or exacerbated by, generative AI compared to non-generative AI?&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Regards&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Caute_Cautim&lt;/P&gt;</description>
      <pubDate>Mon, 09 Oct 2023 10:42:14 GMT</pubDate>
      <guid>https://community.isc2.org/t5/Governance-Risk-Compliance/Risks-of-Generative-AI-HBR/m-p/61723#M931</guid>
      <dc:creator>Caute_cautim</dc:creator>
      <dc:date>2023-10-09T10:42:14Z</dc:date>
    </item>
    <item>
      <title>Re: Risks of Generative AI - HBR</title>
      <link>https://community.isc2.org/t5/Governance-Risk-Compliance/Risks-of-Generative-AI-HBR/m-p/61728#M932</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;The article "Generative AI-nxiety" explores critical concerns at the corporate board level regarding generative AI. Both generative and non-generative AI pose ethical, reputational, regulatory, and legal risks related to bias, privacy, accountability, and unintended consequences.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 16 Aug 2023 10:10:07 GMT</pubDate>
      <guid>https://community.isc2.org/t5/Governance-Risk-Compliance/Risks-of-Generative-AI-HBR/m-p/61728#M932</guid>
      <dc:creator>Paigereed</dc:creator>
      <dc:date>2023-08-16T10:10:07Z</dc:date>
    </item>
    <item>
      <title>Re: Risks of Generative AI - HBR</title>
      <link>https://community.isc2.org/t5/Governance-Risk-Compliance/Risks-of-Generative-AI-HBR/m-p/62166#M940</link>
      <description>&lt;P&gt;I will never disagree that generative AI carries a huge potential for misuse by people with malicious intentions, but at the same time we have to appreciate how much generative AI is doing for us lately: helping kids with schoolwork and research, assisting content creators with high-quality materials, and making life easier for millions through the high degree of automation it offers. Something THAT beneficial will surely be abused in the wrong hands, hence the immediate need for rules and regulations regarding AI-generated content. I myself use a &lt;A href="https://wavel.ai/studio/ai-voice-cloner/" target="_blank" rel="noopener"&gt;voice cloning AI&lt;/A&gt; to create professional voiceovers for my video content using my own voice as a reference, which saves me a great deal of expense on mic setups, editing software, and dubbing artists' fees, since the application itself lets me dub my voice in multiple languages. Keeping in mind the huge benefits we are reaping from AI, regulations must be introduced ASAP so that people can use AI without fear.&lt;/P&gt;</description>
      <pubDate>Tue, 22 Aug 2023 10:49:54 GMT</pubDate>
      <guid>https://community.isc2.org/t5/Governance-Risk-Compliance/Risks-of-Generative-AI-HBR/m-p/62166#M940</guid>
      <dc:creator>annianni</dc:creator>
      <dc:date>2023-08-22T10:49:54Z</dc:date>
    </item>
  </channel>
</rss>