rami99
Newcomer I

ChatGPT data handling

What do you think the risks of generative AI tools such as ChatGPT are with regard to data storage and handling, e.g. ChatGPT retaining user inputs/prompts and outputs (where and how is it stored, and is that concerning?), and how would you go about risk management?

7 Replies
Steve-Wilme
Advocate II

Until you've found a way of assessing the risk posed, you could block access via your corporate proxy.

 

-----------------------------------------------------------
Steve Wilme CISSP-ISSAP, ISSMP MCIIS
AndreaMoore
Community Manager

We recently had a blog post on the topic of ChatGPT.

 

Nobody predicted how rapidly AI chatbots would change perceptions of what is possible. Some worry about how they might improve phishing attacks; more likely, experts think, is their effect on targeting. Read more on the blog:

 

ANALYSIS: WILL CHATGPT’S PERFECT ENGLISH CHANGE THE GAME FOR PHISHING ATTACKS?

 

What are your thoughts?

 
ISC2 Community Manager
Steve-Wilme
Advocate II

The quality of the English is only one of a number of indicators that an email is phishy. It's important to look for the other signs, such as: Are you a named addressee? Does it prompt you to act with urgency? Does it attempt to mimic someone senior in your organisation? Does it sound too good to be true?

 

-----------------------------------------------------------
Steve Wilme CISSP-ISSAP, ISSMP MCIIS
JoePete
Advocate I


@Steve-Wilme wrote:

The quality of the English is only one of a number of indicators that an email is phishy. It's important to look for the other signs, such as: Are you a named addressee? Does it prompt you to act with urgency? Does it attempt to mimic someone senior in your organisation? Does it sound too good to be true?


Back when WiFi became a thing (I know, dating myself), and people were freaking out about its impact on network security, I remember someone noting that if WiFi had you worried, you were probably living with a false sense of security. A lot of the talk about ChatGPT strikes me the same way: If you see it as some watershed threat, you've probably been more vulnerable than you realize. If misspellings are what your employees look for to tip them off to phishing, yikes! Have you seen what passes for "professional" communications these days?

 

I welcome the flaming on this statement, but phishing can be defeated not by going forward, but by going back: Stop using HTML email! It doesn't do what most people think it does, but it certainly obfuscates links and creates the distraction that allows such scams to succeed. Now, I'll grant that in limited, intranet-level communications, it may be OK to use. Regardless, ChatGPT's advantage to the bad guys is that it is a longer lever for exerting the same leverage, not a new category of threat. If AI/ChatGPT is the big bad wolf, then maybe it's finally time to tell grandma to get new glasses, lock her door, and stop letting strangers in the house - things she should have been doing years ago.
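The link-obfuscation point above can be sketched concretely: in HTML email, an anchor's visible text can claim to be one URL while its `href` points somewhere else entirely. A minimal, hypothetical detector for that mismatch (class name and sample markup are illustrative, not from any real mail filter) might look like:

```python
from html.parser import HTMLParser

class LinkMismatchFinder(HTMLParser):
    """Flag anchors whose visible text looks like a URL but differs from the real href."""

    def __init__(self):
        super().__init__()
        self._href = None   # href of the anchor currently being parsed
        self._text = []     # visible text collected inside that anchor
        self.mismatches = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            shown = "".join(self._text).strip()
            # The visible text claims to be a link, but points elsewhere.
            if shown.startswith(("http://", "https://", "www.")) and shown not in self._href:
                self.mismatches.append((shown, self._href))
            self._href = None

# A phishy anchor: the displayed URL and the actual destination disagree.
html = '<p>Reset here: <a href="http://evil.example/reset">https://bank.example/login</a></p>'
finder = LinkMismatchFinder()
finder.feed(html)
print(finder.mismatches)
# → [('https://bank.example/login', 'http://evil.example/reset')]
```

In plaintext email this whole class of trick disappears, because the only URL the reader can see is the one they would actually visit.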

denbesten
Community Champion


@JoePete wrote:

 

I welcome the flaming on this statement, but defeating phishing can be solved, not by going forward, but by going back: Stop using HTML email!


Email phishing is just the latest evolution of a long-standing problem. Confidence men have been an issue "forever", and official-looking, personalized phishing postal mail still arrives to this day. Upgrading (or downgrading 😏) technology is just one more whac in the Whac-A-Mole game.

 

Truly "solving" phishing requires figuring out how to ensure people treat others honestly, truthfully and with respect, which is way beyond "Security" (and maybe even mortals).

 

The only bit I see where IT (not just "security") can help is by throwing light on the tells.  I agree that link obfuscation (or more generally, homograph attacks) is the most likely area where we can contribute.
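On the homograph point: a common trick is registering a domain that mixes Latin letters with visually identical characters from another script (e.g. a Cyrillic "а" inside an otherwise Latin name). A rough heuristic, sketched below with a hypothetical function name, is to flag domains whose letters come from more than one script; the full confusables check defined in Unicode UTS #39 is more thorough.

```python
import unicodedata

def suspicious_domain(domain: str) -> bool:
    """Flag domains that mix letters from multiple scripts (a homograph red flag).

    Rough heuristic only -- not the full UTS #39 confusable-detection algorithm.
    """
    scripts = set()
    for ch in domain:
        if ch in ".-0123456789":
            continue  # digits and separators are script-neutral
        name = unicodedata.name(ch, "")
        # Unicode character names begin with the script, e.g. "CYRILLIC SMALL LETTER A".
        scripts.add(name.split(" ")[0])
    return len(scripts) > 1

print(suspicious_domain("paypal.com"))        # all Latin → False
print(suspicious_domain("p\u0430ypal.com"))   # second 'a' is Cyrillic U+0430 → True
```

The two domains render identically in most fonts, which is exactly why surfacing this tell in the mail client (rather than relying on the reader's eyes) is one of the few places IT can genuinely help.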

JoePete
Advocate I


@denbesten wrote:

Truly "solving" phishing requires figuring out how to ensure people treat others honestly, truthfully and with respect, which is way beyond "Security" (and maybe even mortals).


A variant of this topic is how schools in the US have adopted different curricula meant to teach kids how to spot "good" online sources or even phishing. Good intent, but in my (limited) experience, they do it all wrong. Back in my day, we learned the logical fallacies. Talk about time-tested reasoning. I am fairly convinced that Aristotle would never have been duped by a password reset scam. If anything, the approach I've seen in schools embraces a fallacy (e.g., appeal to authority), but so be it.

 

The older I've gotten (or at least crankier), the younger the age range I look to as to where the problem is. At least in the US, I think schools have been overwhelmed by technology to the point where it detracts from teaching critical thinking and communication skills. Along with that, I would agree that we've lost the larger ideas. We've become very transactional (input for output). By the way, I knew you were going to call foul on me for my HTML ranting; if nothing else, they'll write on my tombstone "did not send HTML email," and it will be written in plaintext 😉

denbesten
Community Champion

Happy to oblige, but my true intention was to call out that social engineering isn't a technological issue in the first place.  Sure, tech makes it worse, and tech can help a bit, but in the end, it is a behavioral problem that as you point out, is best addressed by teaching critical thinking skills.