Hi All
Employees might recognize the potential leak of sensitive data as a top risk, but some individuals still proceed to input such information into publicly available generative artificial intelligence (AI) tools.
This sensitive data includes customer information, sales figures, financial data, and personally identifiable information, such as email addresses and phone numbers. Employees also lack clear policies or guidance on the use of these tools in the workplace, according to research released by Veritas Technologies.
Regards
Caute_Cautim
Yeah.... and it can be completely innocent as well. The integrations are showing up everywhere, Zoom, etc. I know of an instance where someone used otter.ai (the free version) without being aware of the implications on multiple levels. This person didn't always attend every meeting they had been invited to and accepted. They hadn't considered that the tool was effectively leaking a rolling portion of the meeting to a shared public link (perhaps they never realized it), and, more annoyingly, the bot joined the meetings they skipped and crashed them, so to speak (no idea whether that was intentional). Otter may have gotten rid of that behavior, but it was a teaser feature not so long ago... or maybe it is still there?
We just kicked it out of the meeting, but I am sure there are plenty of examples where that was not the case and sensitive corporate data was discussed. While not necessarily exposed, the potential existed. That it became part of some data set or model somewhere, most definitely.
There is an ongoing education campaign now at that workplace asking people to stop, look, and think before leaping....