Caute_cautim
Community Champion

OpenAI admits state-sponsored threat actors are actively using ChatGPT

Hi All.

 

OpenAI has disclosed that cyber-criminals are exploiting its ChatGPT AI model to develop malware and carry out cyberattacks.

In a recent report, OpenAI outlined more than 20 incidents since early 2024 where attackers attempted to misuse ChatGPT for harmful activities.

 

The report, titled “Influence and Cyber Operations: An Update,” indicates that state-sponsored hacking groups, particularly from countries like China and Iran, have been using ChatGPT to bolster their cyberattack capabilities. These malicious activities include refining malware code, creating content for phishing schemes, and spreading disinformation on social media.

 

 

The report details two notable cyberattacks that involved the use of ChatGPT.

 

https://www.linkedin.com/pulse/state-sponsored-threat-actors-using-chatgpt-cyber-gvw8e/?trackingId=x...

 

Has anyone identified the kill switch?

 

Regards

 

Caute

 

 

3 Replies
funkychicken
Contributor I

I knew this was going to be a bad thing from day one, much like the internet: a new service launches with no policy in place from day 1, and people want to abuse it and take advantage of it, ultimately for financial gain. I know OpenAI is supposed to be "open", but there needs to be some kind of usage policy, and any deviation from it needs to be enforced with some kind of disciplinary action if it's used to release something malicious on the internet.

 

I don't see this as an easy task, due to the openness of the whole thing. Signing code for usage is fine internally, but adopting it for public use would be a massive upheaval, and the service was never intended to run that way from day 1 anyway. OpenAI could have some kind of privacy mechanism to tie generated code to a specific user, with a log of all generated code tracked on a central server; anything found to be "not fit for purpose" could be flagged for review, and if it goes against a disclaimer, the person or organisation could be fined. But it's supposed to be open, and to do something like this we would have to get into the realms of ID checking, as banks and financial institutions do.
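
To make that concrete, here is a rough sketch in Python of what such a central logging mechanism could look like. The record format, user IDs and lookup flow are purely hypothetical, not anything OpenAI actually runs:

import hashlib
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical audit record tying a piece of generated code to the
# account that requested it, so it can be reviewed after the fact.
@dataclass
class GenerationRecord:
    user_id: str        # the authenticated account
    code: str           # the generated output
    created: str = ""
    digest: str = ""

    def __post_init__(self):
        self.created = datetime.now(timezone.utc).isoformat()
        # A content hash lets a reviewer match code found "in the wild"
        # back to the generation event without storing anything extra.
        self.digest = hashlib.sha256(self.code.encode()).hexdigest()

AUDIT_LOG: list[GenerationRecord] = []

def log_generation(user_id: str, code: str) -> GenerationRecord:
    """Record every generation event on the (hypothetical) central server."""
    record = GenerationRecord(user_id=user_id, code=code)
    AUDIT_LOG.append(record)
    return record

def find_author(suspect_code: str) -> str | None:
    """Given code recovered from malware, look up who generated it."""
    digest = hashlib.sha256(suspect_code.encode()).hexdigest()
    for record in AUDIT_LOG:
        if record.digest == digest:
            return record.user_id
    return None

Of course, an attacker only has to change a single character for the hash to stop matching, which is exactly the weakness of tying enforcement to exact copies of the code.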

 

I can't think of anything that will stop someone checking code with OpenAI, copying that code and using it for a bad act. Disclaimers can always exist, but they don't stop someone copying the code into a program. The only solution that would stop this is to change "open AI" into "closed AI", but that goes against the whole idea of openness.

Caute_cautim
Community Champion

@funkychicken Would you prefer openness, or disclosure after the fact?

 

It is going to get far worse before anything is done about this - where exactly is that Kill Switch?

 

Regards

 

Caute_Cautim

funkychicken
Contributor I

I think I would prefer open, but with identity disclosure. For example, if you write something, it has to be traceable back to you: if you want to write a piece of software, write a poem or generate some images, there needs to be some copyright on it. Although that isn't going to solve the problem either, because the same thing already happens in our daily lives with pirated software, movies and music.
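
As an illustration of the kind of traceability I mean, generated output could carry a keyed signature that only the provider can verify. The key handling and identifiers here are entirely hypothetical, just a sketch of the idea:

import hmac
import hashlib

# Hypothetical provider-held secret; in practice this would live in an HSM.
PROVIDER_KEY = b"not-a-real-key"

def stamp(user_id: str, content: str) -> str:
    """Attach a provenance tag binding the content to the user who made it."""
    tag = hmac.new(PROVIDER_KEY, f"{user_id}:{content}".encode(),
                   hashlib.sha256).hexdigest()
    return f"{content}\n-- provenance: {user_id}:{tag}"

def verify(stamped: str) -> bool:
    """Provider-side check that a provenance tag is genuine."""
    content, _, trailer = stamped.rpartition("\n-- provenance: ")
    user_id, _, tag = trailer.partition(":")
    expected = hmac.new(PROVIDER_KEY, f"{user_id}:{content}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

The obvious flaw is the same one as above: strip the tag off and the provenance is gone, just like removing a watermark from pirated media.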

 

In terms of the kill switch, AI is embedded nearly everywhere now, so it's going to be difficult to shut down. If you take ChatGPT, or even modules like SciPy, PyTorch, Pandas, Polars and various others, shutting these down can't be done, because companies host them on their own private systems.

 

The only way you can really shut this down is to make it illegal to use, but given how embedded it is now, even in something as simple as a basic process flow model for an online "Customer Support" site, banning it would require a total rewrite of everything that we know.

 

I don't really know the answer to this, but I hope someone with some moral high ground has some level of control to determine how this is going to be used in the future. It's nice that AI can re-write novels, but not nice that governments are using it to attack consumers, steal their data and make financial gain from it.