rami99
Newcomer I

ChatGPT data handling

What do you think are the risks of generative AI such as ChatGPT with regard to data storage (user inputs/prompts) and data handling, e.g. ChatGPT storing user inputs/outputs (where, how, and is it concerning?), and how would you go about risk management?

14 Replies
OliLue
Newcomer III

Hi all,

I think blocking AI tools (ChatGPT, Copilot, etc.) is not the way to handle this topic. I'm not familiar with all the technical possibilities for controlling the use of AI.

I tried to list some points on using AI: https://www.cybersecurity-luerssen.com/en/post/ki-meets-regulation

One important point is an ethical guideline that should guide everyone in the company.

From my perspective there is no all-in-one answer; it is a process of learning how to use AI and handle data in a responsible way.

I'm looking forward to this discussion.

mrsimon0007
Newcomer II

Generative AI systems can raise concerns around how user inputs are stored, processed, and potentially used for model improvement. Risks include unintended data retention, exposure of sensitive information, and lack of transparency about storage and access. Effective risk management involves limiting sensitive inputs, strong data governance, clear retention policies, encryption, access controls, and user awareness about how data is handled.
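One of the controls mentioned above, limiting sensitive inputs, can be partially automated. Below is a minimal Python sketch of prompt redaction before text is sent to an external AI service; the patterns and the `redact_prompt` helper are hypothetical illustrations, and a real deployment would use a dedicated DLP tool with patterns tuned to the organisation:

```python
import re

# Hypothetical patterns for a few common kinds of sensitive data.
# A production system would use a vetted DLP library instead.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "API_KEY": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace recognised sensitive values with placeholders
    before the prompt leaves the organisation's boundary."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact_prompt("Contact alice@example.com, key sk-abcdef1234567890XYZ"))
# → Contact [EMAIL REDACTED], key [API_KEY REDACTED]
```

Redaction like this only reduces, rather than eliminates, the exposure risk, which is why it belongs alongside the governance, retention, and awareness controls listed above.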
mfak1122
Viewer II

 

Thanks for the info

Caute_cautim
Community Champion

@mfak1122 

 

Having worked within IBM for 23 years, one of the key issues that came up is their philosophy towards AI. I suggest you research and adopt their approach; it is a great starting point.

 

IBM's perspective on AI governance is centered on creating trustworthy, transparent, and explainable AI that augments human intelligence rather than replacing it. IBM advocates for a risk-based, collaborative approach to regulation that focuses on high-risk applications, avoids restrictive licensing regimes, and supports an open-source innovation ecosystem. 
Key pillars of IBM's AI governance strategy include:
 
1. Core Principles for Trustworthy AI
IBM defines Trustworthy AI through five fundamental pillars: 
  • Transparency: Disclosing how AI systems are designed, developed, and trained.
  • Explainability: Ensuring AI-driven decisions can be interpreted and understood by humans.
  • Fairness: Actively managing and reducing bias to ensure equitable treatment.
  • Robustness: Enabling AI to handle unexpected conditions and resist technical or adversarial attacks.
  • Privacy: Safeguarding consumer data and maintaining data rights. 
 
2. The "Augmentation" Philosophy
IBM believes that AI is intended to augment, not replace, human intelligence. Governance should ensure that AI acts as a tool to enhance human capabilities, with humans remaining in the loop for critical decision-making. 
 
3. Regulatory and Policy Perspective
  • Regulate Risk, Not Algorithms: IBM argues against licensing regimes that could hinder innovation, advocating instead for regulating the context and use of AI, particularly high-risk scenarios.
  • Support for the EU AI Act: IBM welcomes the risk-based approach of the EU AI Act.
  • Data Provenance: IBM emphasises that trustworthy data is the foundation of AI and supports industry-wide data provenance standards to track data origin. 
 
4. Operationalising Governance (watsonx.governance) 
IBM emphasises that governance must move from theoretical principles to practical, automated application across the AI lifecycle. 
  • watsonx.governance: An AI-powered toolkit designed to help organizations monitor, audit, and manage AI models for compliance, bias, and drift.
  • Internal Governance Structure: IBM utilises the "Responsible Technology Board" (formerly AI Ethics Board) to review AI use cases, supported by an Advocacy Network and Policy Advisory Committee. 
 
5. Commitment to Openness
IBM believes that an open innovation ecosystem is critical for safe, diverse, and rapid AI development. Examples include co-founding The AI Alliance, releasing the Granite models into open source, and collaborating on projects like InstructLab or Qiskit.
IBM's approach to AI governance treats it not as a regulatory burden but as a business enabler that increases confidence in AI, boosts ROI, and strengthens reputation. 
 
Ensure the organisation has an agreed AI governance framework and associated strategy: always use the "Enterprise" version, and do not allow employees to use "free" AI models from within the organisation. Otherwise it will only end in court cases, data leakage, and possible loss of company IP that has been built up over a long period of time. Above all, educate all employees on why you have taken these steps, i.e. to protect the organisation and the individuals, and to ensure productivity enhancements and efficiencies.
 
Regards
 
Caute_Cautim
 
pamelat
Viewer II

The risks around generative AI like ChatGPT often come down to data governance, privacy, and model training assumptions. From a risk management perspective, one key consideration is understanding what data is logged, how long it’s retained, and how it’s used in training or inference, especially in enterprise environments where sensitive information may be input into AI systems. Proper policies should define clear boundaries for input sanitization, data classification, and logging practices, and integrate those into existing governance and compliance frameworks so that AI usage doesn’t inadvertently expose confidential data or violate privacy requirements. It’s also important for organizations to conduct periodic risk assessments of any AI service they integrate, updating controls as models evolve and regulatory guidance matures.
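The "clear boundaries for input sanitization, data classification, and logging" mentioned above can be enforced at a gateway in front of the AI service. Below is a minimal Python sketch of such a gate; the `BLOCKED_MARKERS` list and the `classify_and_gate` helper are hypothetical, standing in for rules that would come from the organisation's real data-classification policy:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-gateway")

# Hypothetical classification markers; real rules would come from
# the organisation's data-classification policy.
BLOCKED_MARKERS = ("CONFIDENTIAL", "SECRET", "CUSTOMER PII")

def classify_and_gate(prompt: str) -> bool:
    """Return True if the prompt may be sent to an external AI
    service, logging every decision for later audit and the
    periodic risk assessments described above."""
    upper = prompt.upper()
    for marker in BLOCKED_MARKERS:
        if marker in upper:
            log.warning("Blocked prompt containing marker %r", marker)
            return False
    log.info("Prompt allowed (no classification markers found)")
    return True
```

Routing every prompt through a logged decision point like this gives the organisation the audit trail it needs to reassess controls as models and regulatory guidance evolve.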