Interesting article about ChatGPT and some of the privacy issues related to it.
https://www.techradar.com/news/samsung-workers-leaked-company-secrets-by-using-chatgpt
I strongly suggest that folks think carefully before using this software, and most likely block its usage.
I find it very misleading for the article to say that data was "leaked" to ChatGPT. To me, a leak means the data became publicly available; a more accurate statement would probably be that the data has been "cached" in ChatGPT. I have not seen any statement from OpenAI explaining how inputted data will be used, so at this point it seems like assumptions are being made. It's almost the same as saying that anything an assistant like Alexa hears is leaked. What happens to the data needs to be clearly defined, but until it is, maybe people should request that information from OpenAI.
If I missed something please let me know...
John-
Every org has to consider writing (and updating!) policy on the misapplication of corporate data. It sounds as if some of their smartest people may have blithely used the tool as "peer review".
So what would have been the defense here? More DLP? Blocking ChatGPT? Security awareness training to remind people about data sensitivity?
@JKWiniger I agree that "leaked" is probably not the best terminology here; "cached" would have been better.
I believe (MHOO) that users simply do not know enough about exactly what is happening with their data. Realistically, the same issue has been with us (security folk) for years with end users who email a spreadsheet of corporate data to themselves or print a copy to take home, etc. In this case the data is just exposed much faster, since it has now been uploaded to an AI service.
@ericgeater I personally would recommend all three of those actions: use DLP if you have it, continue to educate users on properly handling/posting data, and finally, block it from your corporate environment.
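For the DLP piece of that advice, here is a minimal sketch of the idea: scan outbound text for patterns an org has flagged as sensitive before it ever reaches a chatbot. The patterns and the `flags` helper below are hypothetical examples I made up for illustration; real DLP products use far richer detection (exact-data matching, document fingerprinting, etc.).

```python
import re

# Hypothetical patterns an org might flag; adjust to your own data formats.
SENSITIVE_PATTERNS = [
    re.compile(r"\b[A-Z]{2,5}-\d{3,6}\b"),   # assumed internal ticket/part ID format
    re.compile(r"(?i)\bconfidential\b"),     # classification keyword
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),    # US SSN-style number
]

def flags(text: str) -> list[str]:
    """Return the sensitive substrings found in `text`, in pattern order."""
    hits = []
    for pattern in SENSITIVE_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits

# A gateway or browser plugin could refuse the paste when flags() is non-empty.
print(flags("Pasting source for CONFIDENTIAL module ABC-1234 into the chatbot"))
```

Even a crude filter like this would have caught an engineer pasting marked-up proprietary code, which is the scenario the article describes.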