Caute_cautim
Community Champion

AI assistance may lead to more security mistakes

Hi All

 

Well, it had to come, with people wrapped up in AI and ChatGPT: just who, what, and where do you trust?


We know lawyers have already fallen foul of ChatGPT when "operating under a misconception" that AI responses are fully verified and can be trusted for building a legal case, ignoring the potential for hallucination.

What about similar security impacts? New research from Stanford shows that inexperienced developers also placed too much trust in an AI code assistant, preferring that learning experience to being insulted on Stack Overflow. The result?

"inexperienced developers may be inclined to readily trust an AI assistant’s output, at the risk of introducing new security vulnerabilities"

"AI assistants have the potential to decrease user pro-activeness to carefully search for API and safe implement details in library documentation directly."

 

Has anyone else had this experience yet, or are you still experimenting?

 

Regards

 

Caute_Cautim

2 Replies
ericgeater
Community Champion

Some years ago, I stumbled upon the Ten Immutable Laws of Security, and have always been transfixed by the repeated phrase, "... it is no longer your computer."

 

I'd like to add, "If you submit a secret to a third party, it is only as secure as your trust with that third party."

 

Which is to say, it's unfathomable that anyone would willfully offer up any secret (be it intellectual property, a trade secret, secure code, or a secret recipe for fried chicken) to anyone outside their realm if the (insert your secret here) is truly worth protecting.

-----------
A claim is as good as its veracity.
Caute_cautim
Community Champion

@ericgeater  A very good one to add to the original list.

 

Well done.

 

Regards

 

Caute_Cautim