Has anyone considered the issue of employees pasting confidential company data into ChatGPT?
Is a security policy amendment, plus awareness training, needed?
How much will the tool pick up from Microsoft documents, spreadsheets, presentations, etc., that someone pastes in from the organisation?
I supported an employee who didn't mind using a free online PDF maker, because "hey, it's free." He never thought about all the confidential data that website was slurping up.
ChatGPT is just another hole in the ground that we whisper our secrets into. The reeds will grow out of that hole and whistle our stories when the wind blows.
This article seems like nonsense to me! ChatGPT has been questioned for the simple fact that OpenAI will not release any information on what data sets were used to train its models, so saying the information is public is simply not true. The claim that it learns from user-entered data is also way off: as mentioned, these models are trained on curated data sets, not on user-provided data. If user input is being recorded for other purposes and uses, that would certainly be possible, but it would need to be stated in the terms of service. It has also been stated that ChatGPT is not pulling data from the internet and that its data sets are a few years old, so asking about current events will not get proper results.
Some places just put things out to hear themselves talk, I guess...
ChatGPT has been questioned for the simple fact that they will not release any information on what data sets were used to train its models,
Exactly, the challenge with AI is normalization. If you feed a model apples, it will be excellent at identifying apples. But if you feed it fruit salad, you might end up with a strawberry. By the same token, however, if you have a tool that has been designed to identify apples, don't go using it on a bunch of bananas. I'm not so much worried about the data used to train ChatGPT (although knowing more about what's under the hood wouldn't hurt). I am more worried about the unintended consequences of using it in unintended ways.
Has anyone considered the issue of employees pasting confidential company data into ChatGPT? Is a security policy amendment, plus awareness training, needed?
Policies/procedures/work-instructions should already whitelist the authorized methods for processing confidential data, so an update seems unnecessary unless one intends to allow ChatGPT as an authorized processor. That said, there is nothing wrong with a quick validation.
Awareness campaigns, on the other hand, are typically improved by including contemporary examples, but I believe translators and instant messengers are more plausible examples.
Does that include spreadsheets and images, all of which ChatGPT can now consume, as well as plain text?
Yes, I agree that whitelisting helps, but it is easy enough to cut and paste information directly unless you have a good DLP solution in place to prevent this from occurring.
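To make the point concrete, here is a minimal sketch of the kind of pattern matching a DLP control might apply to outbound text before it leaves the organisation. The pattern names and regexes are hypothetical examples; real DLP products use far richer detection (document fingerprinting, exact data matching, classifiers), not just regexes.

```python
import re

# Hypothetical example patterns a DLP rule set might flag.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "confidential_label": re.compile(r"(?i)\bconfidential\b"),
}

def scan_outbound_text(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

# A paste containing a marking label and a card number trips two rules.
hits = scan_outbound_text("CONFIDENTIAL: Q3 forecast, card 4111 1111 1111 1111")
print(hits)
```

A control like this would sit at the egress point (browser extension, proxy, or endpoint agent) and block or warn before the paste reaches an external site such as ChatGPT.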
Yes, the AI has to consume good data, which must be normalised, checked for bias, and cleaned as necessary to remain objective.
Over time a lot of data will be collected from many different sources; hopefully ongoing analysis is used to clear out information that is obviously incorrect or politically motivated.
My guess is that ChatGPT will be the final nail in the coffin for (ISC)² online exams.
Oh, it was way sooner than ChatGPT.
(ISC)² Concludes Online Proctored Exams Do Not Meet Exam Security Standards - (ISC)² Blog (isc2.org)
The blog left the door cracked, concluding it "may initiate additional pilots as technology ...evolve". Well, technology did evolve. I'm betting headlines such as these just slammed the door shut.
ChatGPT passes exams from law and business schools
Sam Altman ... says 'can pass a bar exam and score a 5 on several AP exams'
Bar exam score shows AI can keep up with 'human lawyers,' researchers say
Used to be that cheating required a covert channel to a SmartFriend™. ChatGPT eliminates the necessity that the friend be smart.
It is interesting that some people deem the information passed to ChatGPT to be "training data".
Yet the attached image says otherwise: for example, out of 100,000 incidents, 199 were related to the release of sensitive information.