Caute_cautim
Community Champion

OpenAI admits state-sponsored threat actors are actively using ChatGPT

Hi All.

 

OpenAI has disclosed that cyber-criminals are exploiting its ChatGPT AI model to develop malware and carry out cyberattacks.

In a recent report, OpenAI outlined more than 20 incidents since early 2024 where attackers attempted to misuse ChatGPT for harmful activities.

 

The report, titled “Influence and Cyber Operations: An Update,” indicates that state-sponsored hacking groups, particularly from countries like China and Iran, have been using ChatGPT to bolster their cyberattack capabilities. These malicious activities include refining malware code, creating content for phishing schemes, and spreading disinformation on social media.

 

 

The report details two notable cyberattacks that involved the use of ChatGPT.

 

https://www.linkedin.com/pulse/state-sponsored-threat-actors-using-chatgpt-cyber-gvw8e/?trackingId=x...

 

Has anyone identified the kill switch?

 

Regards

 

Caute

 

 

2 Replies
funkychicken
Contributor I

I knew this was going to be a bad thing from day one, much like the internet: no policy is in place from day one with a new service, and people want to abuse it and take advantage of it, ultimately for financial gain. I know OpenAI is supposed to be "open", but there needs to be some kind of usage policy, and any deviation from it needs to be enforced with some kind of disciplinary action if the output is used to release something malicious on the internet.

 

I don't see this as an easy task, due to the openness of the whole thing. Signing code is fine for internal use, but adopting it for public use would be a massive upheaval, and the service was never intended to run that way from day one anyway. OpenAI could introduce some kind of provenance mechanism to tie generated code to a specific user: log all generated code on a central server, flag anything found to be "not fit for purpose" for review, and fine the person or organisation if it breaches a disclaimer. But it's supposed to be open, and to do something like this we would be getting into the realms of the ID checks used by banks and financial institutions.
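
 

To make that concrete, here is a rough sketch (Python; the CodeProvenanceLog class and its record/lookup methods are entirely hypothetical, nothing like this exists in OpenAI's actual API) of the kind of central log I mean, where every generated snippet is fingerprinted and tied to the account that requested it:

```python
# Hypothetical sketch of a central provenance log: hash every generated
# snippet, tie the hash to the requesting account, and let a reviewer
# look up who requested a snippet that later turns up in malware.
import hashlib
from datetime import datetime, timezone


class CodeProvenanceLog:
    def __init__(self):
        # Maps SHA-256 of a generated snippet -> (user_id, timestamp).
        self._log = {}

    @staticmethod
    def _fingerprint(code: str) -> str:
        return hashlib.sha256(code.encode("utf-8")).hexdigest()

    def record(self, user_id: str, code: str) -> str:
        """Log a generated snippet against the account that requested it."""
        digest = self._fingerprint(code)
        self._log[digest] = (user_id, datetime.now(timezone.utc))
        return digest

    def lookup(self, suspicious_code: str):
        """Return (user_id, timestamp) if this exact snippet was generated."""
        return self._log.get(self._fingerprint(suspicious_code))


log = CodeProvenanceLog()
log.record("user-1234", "print('hello')")
print(log.lookup("print('hello')"))  # ('user-1234', <timestamp>)
print(log.lookup("rm -rf /"))        # None -- snippet was never generated
```

The obvious weakness is that an exact-hash match is trivially defeated: rename one variable or add a blank line and the fingerprint changes completely, which is the copying problem I describe next.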

 

I can't think of anything that will stop someone checking code with OpenAI, copying that code, and using it for a bad act. Disclaimers can always exist, but they do not stop someone copying the code into a program. The way to stop this would be to change open AI into closed AI, but that goes against the whole idea of openness.

Caute_cautim
Community Champion

@funkychicken Would you prefer openness, or disclosure after the fact?

 

It is going to get far worse before anything is done about this. Where exactly is that kill switch?

 

Regards

 

Caute_Cautim