What do you think are the risks of generative AIs such as ChatGPT with regard to data storage (user inputs/prompts) and data handling, e.g. ChatGPT storing user inputs/outputs (where and how, and is it concerning?), and how would you go about risk management?
Hi all,
I think blocking AI tools (ChatGPT, Copilot, etc.) is not the way to handle this topic. I'm not familiar with all the technical possibilities for controlling the use of AI.
I tried to list some points on using AI: https://www.cybersecurity-luerssen.com/en/post/ki-meets-regulation
And one important point is the ethical guideline, which should guide everyone in the company.
From my perspective there is no all-in-one answer; it's a process of learning how to use AI and handle the data in a responsible way.
I'm looking forward to this discussion.
Thanks for the info
Having worked within IBM for 23 years, one of the key issues that came up was their philosophy towards AI. I suggest you research and adopt their approach; it's a great starting point.
The risks around generative AI like ChatGPT often come down to data governance, privacy, and model training assumptions. From a risk management perspective, one key consideration is understanding what data is logged, how long it’s retained, and how it’s used in training or inference, especially in enterprise environments where sensitive information may be input into AI systems. Proper policies should define clear boundaries for input sanitization, data classification, and logging practices, and integrate those into existing governance and compliance frameworks so that AI usage doesn’t inadvertently expose confidential data or violate privacy requirements. It’s also important for organizations to conduct periodic risk assessments of any AI service they integrate, updating controls as models evolve and regulatory guidance matures.
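To make the input-sanitization point concrete, here's a minimal Python sketch of a pre-submission gate that classifies and redacts prompts before they leave the network. Everything here is a hypothetical stand-in: the pattern list, the `classify_prompt`/`sanitize_prompt` helpers, and the placeholder `send_prompt` call are illustrations of the control, not any vendor's actual API or a production DLP tool.

```python
import re

# Hypothetical pattern list for illustration only; a real deployment would
# use a proper DLP/classification engine tuned to the organization's data.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9_-]{16,}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data categories found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def sanitize_prompt(prompt: str) -> str:
    """Redact matched sensitive spans before the prompt leaves the network."""
    for name, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED:{name}]", prompt)
    return prompt

def submit_to_ai_service(prompt: str) -> None:
    """Gate outbound prompts: log findings, then redact (or block) per policy."""
    findings = classify_prompt(prompt)
    if findings:
        # Policy choice here: redact and warn rather than block outright.
        print(f"Warning: redacted {findings} before submission")
        prompt = sanitize_prompt(prompt)
    # send_prompt(prompt)  # placeholder for the actual AI service call
    print("Outbound prompt:", prompt)

if __name__ == "__main__":
    submit_to_ai_service(
        "Summarize the ticket from alice@example.com, key sk_test_abcdefghijklmnop"
    )
```

The design point is where the gate sits: it runs before the API call, so the logging and classification decisions stay inside your own governance boundary regardless of what the AI provider retains.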