<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: ChatGPT data handling in Governance, Risk, Compliance</title>
    <link>https://community.isc2.org/t5/Governance-Risk-Compliance/ChatGPT-data-handling/m-p/58328#M845</link>
    <description>&lt;P&gt;The quality of the English is only one of a number of indicators that an email is phishy. It's important to look for the other signs, such as: Are you a named addressee? Does it prompt you to act with urgency? Does it attempt to mimic someone senior in your organisation? Does it sound too good to be true?&lt;/P&gt;</description>
    <pubDate>Thu, 06 Apr 2023 10:09:42 GMT</pubDate>
    <dc:creator>Steve-Wilme</dc:creator>
    <dc:date>2023-04-06T10:09:42Z</dc:date>
    <item>
      <title>ChatGPT data handling</title>
      <link>https://community.isc2.org/t5/Governance-Risk-Compliance/ChatGPT-data-handling/m-p/58069#M833</link>
      <description>&lt;P&gt;What do you think are the risks of generative AIs such as ChatGPT with regard to data storage (user inputs/prompts) and data handling, e.g. ChatGPT storing user inputs and outputs (where and how, and is it concerning?), and how would you go about risk management?&lt;/P&gt;</description>
      <pubDate>Sat, 25 Mar 2023 20:10:52 GMT</pubDate>
      <guid>https://community.isc2.org/t5/Governance-Risk-Compliance/ChatGPT-data-handling/m-p/58069#M833</guid>
      <dc:creator>rami99</dc:creator>
      <dc:date>2023-03-25T20:10:52Z</dc:date>
    </item>
    <item>
      <title>Re: ChatGPT data handling</title>
      <link>https://community.isc2.org/t5/Governance-Risk-Compliance/ChatGPT-data-handling/m-p/58084#M835</link>
      <description>&lt;P&gt;Until you've found a way of assessing the risk posed, you could block access via your corporate proxy.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 27 Mar 2023 07:58:46 GMT</pubDate>
      <guid>https://community.isc2.org/t5/Governance-Risk-Compliance/ChatGPT-data-handling/m-p/58084#M835</guid>
      <dc:creator>Steve-Wilme</dc:creator>
      <dc:date>2023-03-27T07:58:46Z</dc:date>
    </item>
    <item>
      <title>Re: ChatGPT data handling</title>
      <link>https://community.isc2.org/t5/Governance-Risk-Compliance/ChatGPT-data-handling/m-p/58251#M839</link>
      <description>&lt;P&gt;We recently had a blog post on the topic of ChatGPT.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;EM&gt;Nobody predicted how rapidly AI chatbots would change perceptions of what is possible. Some worry how it might improve phishing attacks. More likely, experts think, will be its effect on targeting. Click to read more on the blog:&amp;nbsp;&lt;/EM&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;A href="https://blog.isc2.org/isc2_blog/2023/03/analysis-will-chatgpts-perfect-english-change-the-game-for-phishing-attacks.html" target="_blank" rel="noopener"&gt;&lt;FONT size="4"&gt;ANALYSIS: WILL CHATGPT’S PERFECT ENGLISH CHANGE THE GAME FOR PHISHING ATTACKS?&lt;/FONT&gt;&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;What are your thoughts?&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 03 Apr 2023 18:38:06 GMT</pubDate>
      <guid>https://community.isc2.org/t5/Governance-Risk-Compliance/ChatGPT-data-handling/m-p/58251#M839</guid>
      <dc:creator>AndreaMoore</dc:creator>
      <dc:date>2023-04-03T18:38:06Z</dc:date>
    </item>
    <item>
      <title>Re: ChatGPT data handling</title>
      <link>https://community.isc2.org/t5/Governance-Risk-Compliance/ChatGPT-data-handling/m-p/58328#M845</link>
      <description>&lt;P&gt;The quality of the English is only one of a number of indicators that an email is phishy. It's important to look for the other signs, such as: Are you a named addressee? Does it prompt you to act with urgency? Does it attempt to mimic someone senior in your organisation? Does it sound too good to be true?&lt;/P&gt;</description>
      <pubDate>Thu, 06 Apr 2023 10:09:42 GMT</pubDate>
      <guid>https://community.isc2.org/t5/Governance-Risk-Compliance/ChatGPT-data-handling/m-p/58328#M845</guid>
      <dc:creator>Steve-Wilme</dc:creator>
      <dc:date>2023-04-06T10:09:42Z</dc:date>
    </item>
    <item>
      <title>Re: ChatGPT data handling</title>
      <link>https://community.isc2.org/t5/Governance-Risk-Compliance/ChatGPT-data-handling/m-p/58339#M848</link>
      <description>&lt;BLOCKQUOTE&gt;&lt;HR /&gt;&lt;a href="https://community.isc2.org/t5/user/viewprofilepage/user-id/783051913"&gt;@Steve-Wilme&lt;/a&gt;&amp;nbsp;wrote:&lt;BR /&gt;&lt;P&gt;The quality of the English is only one of a number of indicators that an email is phishy. It's important to look for the other signs, such as: Are you a named addressee? Does it prompt you to act with urgency? Does it attempt to mimic someone senior in your organisation? Does it sound too good to be true?&lt;/P&gt;&lt;HR /&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;Back when WiFi became a thing (I know, dating myself), and people were freaking out about its impact on network security, I remember someone noting that if WiFi had you worried, you were probably living with a false sense of security. A lot of the talk about ChatGPT strikes me the same way: if you see it as some watershed threat, you've probably been more vulnerable than you realize. If misspellings are what your employees look for to tip them off to phishing, yikes! Have you seen what passes for "professional" communications these days?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I welcome the flaming on this statement, but phishing can be defeated not by going forward but by going back: stop using HTML email! It doesn't do what most people think it does, but it certainly obfuscates links and creates the distraction that allows such scams to succeed. Now, I'll grant that in limited, intranet-level communications it may be OK to use. Regardless, ChatGPT's advantage to the bad guys is that it is a longer lever, not a new category of threat. If AI/ChatGPT is the big bad wolf, then maybe it's finally time to tell grandma to get new glasses, lock her door, and stop letting strangers in the house - things she should have been doing years ago.&lt;/P&gt;</description>
      <pubDate>Thu, 06 Apr 2023 13:39:46 GMT</pubDate>
      <guid>https://community.isc2.org/t5/Governance-Risk-Compliance/ChatGPT-data-handling/m-p/58339#M848</guid>
      <dc:creator>JoePete</dc:creator>
      <dc:date>2023-04-06T13:39:46Z</dc:date>
    </item>
    <item>
      <title>Re: ChatGPT data handling</title>
      <link>https://community.isc2.org/t5/Governance-Risk-Compliance/ChatGPT-data-handling/m-p/58355#M849</link>
      <description>&lt;BLOCKQUOTE&gt;&lt;HR /&gt;&lt;a href="https://community.isc2.org/t5/user/viewprofilepage/user-id/1005241419"&gt;@JoePete&lt;/a&gt;&amp;nbsp;wrote:&lt;P class=""&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I welcome the flaming on this statement, but phishing can be defeated not by going forward but by going back: stop using HTML email!&lt;/P&gt;&lt;HR /&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;Email phishing is just the latest evolution of a long-standing problem.&amp;nbsp;&lt;A href="https://en.wikipedia.org/wiki/List_of_con_artists" target="_blank" rel="noopener"&gt;Confidence men&lt;/A&gt;&amp;nbsp;have been an issue "forever". And official-looking, personalized phishing postal mail still arrives to this day. Upgrading (or downgrading &lt;span class="lia-unicode-emoji" title=":smirking_face:"&gt;😏&lt;/span&gt; ) technology is just one more whac in the&amp;nbsp;&lt;A href="https://en.wikipedia.org/wiki/Whac-A-Mole" target="_blank" rel="noopener"&gt;Whac-A-Mole&lt;/A&gt;&amp;nbsp;game.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Truly "solving" phishing requires figuring out how to ensure people treat others honestly, truthfully, and with respect, which is way beyond "Security" (and maybe even mortals).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The only bit I see where IT (not just "security") can help is by throwing light on the &lt;A href="https://en.wikipedia.org/wiki/Tell_(poker)" target="_blank" rel="noopener"&gt;tells&lt;/A&gt;. I agree that link obfuscation (or more generally,&amp;nbsp;&lt;A href="https://www.malwarebytes.com/blog/news/2017/10/out-of-character-homograph-attacks-explained" target="_blank" rel="noopener"&gt;homograph attacks&lt;/A&gt;) is the most likely area where we can contribute.&lt;/P&gt;</description>
      <pubDate>Thu, 06 Apr 2023 18:41:23 GMT</pubDate>
      <guid>https://community.isc2.org/t5/Governance-Risk-Compliance/ChatGPT-data-handling/m-p/58355#M849</guid>
      <dc:creator>denbesten</dc:creator>
      <dc:date>2023-04-06T18:41:23Z</dc:date>
    </item>
    <item>
      <title>Re: ChatGPT data handling</title>
      <link>https://community.isc2.org/t5/Governance-Risk-Compliance/ChatGPT-data-handling/m-p/58360#M851</link>
      <description>&lt;BLOCKQUOTE&gt;&lt;HR /&gt;&lt;a href="https://community.isc2.org/t5/user/viewprofilepage/user-id/311867713"&gt;@denbesten&lt;/a&gt;&amp;nbsp;wrote:&lt;P class=""&gt;&lt;SPAN&gt;Truly "solving" phishing requires figuring out how to ensure people treat others honestly, truthfully, and with respect, which is way beyond "Security" (and maybe even mortals).&lt;/SPAN&gt;&lt;/P&gt;&lt;HR /&gt;&lt;/BLOCKQUOTE&gt;&lt;P&gt;A variant of this topic is how schools in the US have adopted different curricula meant to teach kids how to spot "good" online sources or even phishing. Good intent, but in my (limited) experience, they do it all wrong. Back in my day, we learned the logical fallacies. Talk about time-tested reasoning. I am fairly convinced that Aristotle would never have been duped by a password reset scam. If anything, the approach I've seen in schools embraces a fallacy (e.g., appeal to authority), but so be it.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The older I've gotten (or at least crankier), the younger the age range I look to as the source of the problem. At least in the US, I think schools have been overwhelmed by technology to the point where it detracts from teaching critical thinking and communication skills. Along with that, I would agree that we've lost larger ideas. We've become very transactional (input for output). By the way, I knew you were going to call foul on me for my HTML ranting; if nothing else, they'll write on my tombstone "did not send HTML email," and it will be written in plaintext &lt;span class="lia-unicode-emoji" title=":winking_face:"&gt;😉&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 07 Apr 2023 12:25:21 GMT</pubDate>
      <guid>https://community.isc2.org/t5/Governance-Risk-Compliance/ChatGPT-data-handling/m-p/58360#M851</guid>
      <dc:creator>JoePete</dc:creator>
      <dc:date>2023-04-07T12:25:21Z</dc:date>
    </item>
    <item>
      <title>Re: ChatGPT data handling</title>
      <link>https://community.isc2.org/t5/Governance-Risk-Compliance/ChatGPT-data-handling/m-p/58371#M853</link>
      <description>&lt;P&gt;Happy to oblige, but my true intention was to call out that social engineering isn't a technological issue in the first place. Sure, tech makes it worse, and tech can help a bit, but in the end it is a behavioral problem that, as you point out, is best addressed by teaching critical thinking skills.&lt;/P&gt;</description>
      <pubDate>Sat, 08 Apr 2023 04:01:31 GMT</pubDate>
      <guid>https://community.isc2.org/t5/Governance-Risk-Compliance/ChatGPT-data-handling/m-p/58371#M853</guid>
      <dc:creator>denbesten</dc:creator>
      <dc:date>2023-04-08T04:01:31Z</dc:date>
    </item>
    <item>
      <title>Re: ChatGPT data handling</title>
      <link>https://community.isc2.org/t5/Governance-Risk-Compliance/ChatGPT-data-handling/m-p/85806#M1429</link>
      <description>You can block access through your corporate proxy until you have a proper way to assess the risk.</description>
      <pubDate>Sun, 23 Nov 2025 10:57:46 GMT</pubDate>
      <guid>https://community.isc2.org/t5/Governance-Risk-Compliance/ChatGPT-data-handling/m-p/85806#M1429</guid>
      <dc:creator>mrsimon0007</dc:creator>
      <dc:date>2025-11-23T10:57:46Z</dc:date>
    </item>
    <item>
      <title>Re: ChatGPT data handling</title>
      <link>https://community.isc2.org/t5/Governance-Risk-Compliance/ChatGPT-data-handling/m-p/86357#M1439</link>
      <description>&lt;P&gt;Hi all,&lt;/P&gt;&lt;P&gt;I think blocking AI tools (ChatGPT, Copilot, etc.) is not the way to handle this topic. I'm not familiar with all the technical possibilities for controlling the use of AI.&lt;/P&gt;&lt;P&gt;I tried to list some points on using AI:&amp;nbsp;&lt;A href="https://www.cybersecurity-luerssen.com/en/post/ki-meets-regulation" target="_blank"&gt;https://www.cybersecurity-luerssen.com/en/post/ki-meets-regulation&lt;/A&gt;&lt;/P&gt;&lt;P&gt;One important point is the ethical guideline, which should guide everyone in the company.&lt;/P&gt;&lt;P&gt;From my perspective there is no one-size-fits-all answer; there is a process of learning how to use AI and handle the data in a responsible way.&lt;/P&gt;&lt;P&gt;I'm looking forward to this discussion.&lt;/P&gt;</description>
      <pubDate>Fri, 12 Dec 2025 08:36:51 GMT</pubDate>
      <guid>https://community.isc2.org/t5/Governance-Risk-Compliance/ChatGPT-data-handling/m-p/86357#M1439</guid>
      <dc:creator>OliLue</dc:creator>
      <dc:date>2025-12-12T08:36:51Z</dc:date>
    </item>
    <item>
      <title>Re: ChatGPT data handling</title>
      <link>https://community.isc2.org/t5/Governance-Risk-Compliance/ChatGPT-data-handling/m-p/86658#M1443</link>
      <description>Generative AI systems can raise concerns around how user inputs are stored, processed, and potentially used for model improvement. Risks include unintended data retention, exposure of sensitive information, and lack of transparency about storage and access. Effective risk management involves limiting sensitive inputs, strong data governance, clear retention policies, encryption, access controls, and user awareness about how data is handled.</description>
      <pubDate>Sun, 21 Dec 2025 11:04:06 GMT</pubDate>
      <guid>https://community.isc2.org/t5/Governance-Risk-Compliance/ChatGPT-data-handling/m-p/86658#M1443</guid>
      <dc:creator>mrsimon0007</dc:creator>
      <dc:date>2025-12-21T11:04:06Z</dc:date>
    </item>
    <item>
      <title>Re: ChatGPT data handling</title>
      <link>https://community.isc2.org/t5/Governance-Risk-Compliance/ChatGPT-data-handling/m-p/87269#M1452</link>
      <description>&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks for the info&lt;/P&gt;</description>
      <pubDate>Sun, 18 Jan 2026 21:02:41 GMT</pubDate>
      <guid>https://community.isc2.org/t5/Governance-Risk-Compliance/ChatGPT-data-handling/m-p/87269#M1452</guid>
      <dc:creator>mfak1122</dc:creator>
      <dc:date>2026-01-18T21:02:41Z</dc:date>
    </item>
    <item>
      <title>Re: ChatGPT data handling</title>
      <link>https://community.isc2.org/t5/Governance-Risk-Compliance/ChatGPT-data-handling/m-p/87541#M1454</link>
      <description>&lt;P&gt;&lt;a href="https://community.isc2.org/t5/user/viewprofilepage/user-id/338170793"&gt;@mfak1122&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Having worked within IBM for 23 years, one of the key issues that came up is their philosophy towards AI; I suggest you research and adopt their approach, as it is a great starting point.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;DIV class=""&gt;&lt;DIV&gt;&lt;DIV class=""&gt;&lt;DIV&gt;IBM's perspective on AI governance is&amp;nbsp;centered on creating &lt;STRONG&gt;trustworthy, transparent, and explainable AI&lt;/STRONG&gt; that augments human intelligence rather than replacing it. IBM advocates for a risk-based, collaborative approach to regulation that focuses on high-risk applications, avoids restrictive licensing regimes, and supports an open-source innovation ecosystem.&lt;SPAN class=""&gt;&lt;SPAN class=""&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/DIV&gt;&lt;/DIV&gt;&lt;DIV class=""&gt;Key pillars of IBM's AI governance strategy include:&lt;/DIV&gt;&lt;DIV class=""&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV class=""&gt;1. 
Core Principles for Trustworthy AI&lt;/DIV&gt;&lt;DIV class=""&gt;IBM defines Trustworthy AI through five fundamental pillars:&lt;SPAN class=""&gt;&lt;SPAN class=""&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/DIV&gt;&lt;UL class=""&gt;&lt;LI&gt;&lt;SPAN class=""&gt;&lt;STRONG&gt;Transparency:&lt;/STRONG&gt; Disclosing how AI systems are designed, developed, and trained.&lt;/SPAN&gt;&lt;/LI&gt;&lt;LI&gt;&lt;SPAN class=""&gt;&lt;STRONG&gt;Explainability:&lt;/STRONG&gt; Ensuring AI-driven decisions can be interpreted and understood by humans.&lt;/SPAN&gt;&lt;/LI&gt;&lt;LI&gt;&lt;SPAN class=""&gt;&lt;STRONG&gt;Fairness:&lt;/STRONG&gt; Actively managing and reducing bias to ensure equitable treatment.&lt;/SPAN&gt;&lt;/LI&gt;&lt;LI&gt;&lt;SPAN class=""&gt;&lt;STRONG&gt;Robustness:&lt;/STRONG&gt; Enabling AI to handle unexpected conditions and resist technical or adversarial attacks.&lt;/SPAN&gt;&lt;/LI&gt;&lt;LI&gt;&lt;SPAN class=""&gt;&lt;STRONG&gt;Privacy:&lt;/STRONG&gt; Safeguarding consumer data and maintaining data rights.&lt;/SPAN&gt;&lt;SPAN class=""&gt;&lt;SPAN class=""&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;DIV class=""&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV class=""&gt;2. The "Augmentation" Philosophy&lt;/DIV&gt;&lt;DIV class=""&gt;&lt;SPAN class=""&gt;&lt;U&gt;&lt;A class="" href="https://www.ibm.com/new/announcements/governing-ai-with-confidence-our-journey-with-watsonx-governance" target="_blank" rel="noopener"&gt;IBM&lt;/A&gt;&lt;/U&gt; believes that AI is intended to augment, not replace, human intelligence&lt;/SPAN&gt;. Governance should ensure that AI acts as a tool to enhance human capabilities, with humans remaining in the loop for critical decision-making.&lt;SPAN class=""&gt;&lt;SPAN class=""&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class=""&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV class=""&gt;3. 
Regulatory and Policy Perspective&lt;/DIV&gt;&lt;UL class=""&gt;&lt;LI&gt;&lt;SPAN class=""&gt;&lt;STRONG&gt;Regulate Risk, Not Algorithms:&lt;/STRONG&gt; IBM argues against licensing regimes that could hinder innovation, advocating instead for regulating the &lt;EM&gt;context&lt;/EM&gt; and &lt;EM&gt;use&lt;/EM&gt; of AI, particularly high-risk scenarios.&lt;/SPAN&gt;&lt;/LI&gt;&lt;LI&gt;&lt;SPAN class=""&gt;&lt;STRONG&gt;Support for the EU AI Act:&lt;/STRONG&gt; IBM welcomes the risk-based approach of the EU AI Act.&lt;/SPAN&gt;&lt;/LI&gt;&lt;LI&gt;&lt;SPAN class=""&gt;&lt;STRONG&gt;Data Provenance:&lt;/STRONG&gt; IBM emphasises that trustworthy data is the foundation of AI and supports industry-wide data provenance standards to track data origin.&lt;/SPAN&gt;&lt;SPAN class=""&gt;&lt;SPAN class=""&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;DIV class=""&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV class=""&gt;4. Operationalising Governance (watsonx.governance)&lt;SPAN class=""&gt;&lt;SPAN class=""&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class=""&gt;IBM emphasises that governance must move from theoretical principles to practical, automated application across the AI lifecycle.&lt;SPAN class=""&gt;&lt;SPAN class=""&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/DIV&gt;&lt;UL class=""&gt;&lt;LI&gt;&lt;SPAN class=""&gt;&lt;STRONG&gt;watsonx.governance:&lt;/STRONG&gt; An AI-powered toolkit designed to help organizations monitor, audit, and manage AI models for compliance, bias, and drift.&lt;/SPAN&gt;&lt;/LI&gt;&lt;LI&gt;&lt;SPAN class=""&gt;&lt;STRONG&gt;Internal Governance Structure:&lt;/STRONG&gt; IBM utilises the "Responsible Technology Board" (formerly AI Ethics Board) to review AI use cases, supported by an Advocacy Network and Policy Advisory Committee.&lt;/SPAN&gt;&lt;SPAN class=""&gt;&lt;SPAN class=""&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/LI&gt;&lt;/UL&gt;&lt;DIV class=""&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV class=""&gt;5. 
Commitment to Openness&lt;/DIV&gt;&lt;DIV class=""&gt;IBM believes that an open innovation ecosystem is critical for safe, diverse, and rapid AI development. Examples include co-founding The AI Alliance, releasing the Granite models into open source, and collaborating on projects like InstructLab or Qiskit.&lt;/DIV&gt;&lt;DIV class=""&gt;IBM's approach to AI governance treats it not as a regulatory burden but as a &lt;STRONG&gt;business enabler&lt;/STRONG&gt; that increases confidence in AI, boosts ROI, and strengthens reputation.&lt;SPAN class=""&gt;&lt;SPAN class=""&gt;&amp;nbsp;&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/DIV&gt;&lt;DIV class=""&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV class=""&gt;Ensure the organisation has an agreed AI governance framework and associated strategy; always use the "Enterprise" version and do not allow employees to use the "free" AI models from within the organisation. Otherwise it will only end up in court cases, data leakage and possible loss of company IP that has been built up over a long period of time. Above all, educate all employees on why you have taken these steps, i.e. to protect the organisation and the individuals, and to ensure productivity enhancements and efficiencies.&lt;/DIV&gt;&lt;DIV class=""&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV class=""&gt;Regards&lt;/DIV&gt;&lt;DIV class=""&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV class=""&gt;Caute_Cautim&lt;/DIV&gt;&lt;/DIV&gt;&lt;/DIV&gt;&lt;DIV class=""&gt;&lt;DIV&gt;&amp;nbsp;&lt;/DIV&gt;&lt;/DIV&gt;</description>
      <pubDate>Wed, 28 Jan 2026 02:53:08 GMT</pubDate>
      <guid>https://community.isc2.org/t5/Governance-Risk-Compliance/ChatGPT-data-handling/m-p/87541#M1454</guid>
      <dc:creator>Caute_cautim</dc:creator>
      <dc:date>2026-01-28T02:53:08Z</dc:date>
    </item>
    <item>
      <title>Re: ChatGPT data handling</title>
      <link>https://community.isc2.org/t5/Governance-Risk-Compliance/ChatGPT-data-handling/m-p/87630#M1455</link>
      <description>&lt;P&gt;The risks around generative AI like ChatGPT often come down to data governance, privacy, and model training assumptions. From a risk management perspective, one key consideration is understanding what data is logged, how long it’s retained, and how it’s used in training or inference, especially in enterprise environments where sensitive information may be input into AI systems. Proper policies should define clear boundaries for input sanitization, data classification, and logging practices, and integrate those into existing governance and compliance frameworks so that AI usage doesn’t inadvertently expose confidential data or violate privacy requirements. It’s also important for organizations to conduct periodic risk assessments of any AI service they integrate, updating controls as models evolve and regulatory guidance matures.&lt;/P&gt;</description>
      <pubDate>Sat, 31 Jan 2026 10:48:59 GMT</pubDate>
      <guid>https://community.isc2.org/t5/Governance-Risk-Compliance/ChatGPT-data-handling/m-p/87630#M1455</guid>
      <dc:creator>pamelat</dc:creator>
      <dc:date>2026-01-31T10:48:59Z</dc:date>
    </item>
  </channel>
</rss>

