Hi All
This study from December 2023 explores the impact of AI code assistants, such as GitHub Copilot, on the security of the code developers write. It found that participants with access to an AI assistant wrote significantly less secure code than those who worked unassisted.
The study included 47 participants who performed five security-related programming tasks spanning three different programming languages (Python, JavaScript, and C).
Participants were randomly assigned to either a control group (14 participants), which solved the tasks without AI assistance, or an experiment group (33 participants), which had access to an AI code assistant.
Participants solved the tasks within a purpose-built user interface, and all interactions, including AI queries, responses, and final code outputs, were logged for analysis.
The study found that users with access to AI assistants were more likely to introduce security vulnerabilities into their code, and paradoxically, they were also more likely to believe their insecure code was secure. Those who put more effort into crafting their prompts and adjusting parameters were more likely to generate secure solutions.
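To make the kind of vulnerability at stake concrete, here is a short illustration of my own (not code from the study, whose SQL task actually used JavaScript): an assistant-style suggestion that builds a SQL query by string concatenation, next to the parameterized version a security-aware prompt tends to elicit.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # attacker-controlled value

# Insecure: concatenation lets the input rewrite the query itself
rows = conn.execute(
    "SELECT role FROM users WHERE name = '" + user_input + "'"
).fetchall()
print(rows)  # returns the admin row despite the bogus name

# Secure: a parameterized query treats the input purely as data
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # returns [] as expected
```

Both versions "work" on well-formed input, which is exactly why an unwary user can believe the insecure one is fine.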
The study suggests:
- Refining user prompts can improve AI-generated code quality by fixing typos and incorporating security-specific language (see the first sketch after this list).
- Developing machine-learning methods to predict user intent and modify prompts can help safeguard against known vulnerabilities.
- Educating users on how to interact effectively with AI assistants and how to validate AI-generated code can mitigate security risks, especially when coding environments provide real-time documentation and flagging mechanisms.
- Improving AI training data by using static analysis tools to filter out insecure code can significantly enhance the security of AI outputs (second sketch below).
- Enhancing AI interface design by making advanced settings, such as the sampling temperature, more accessible and encouraging users to explore different outputs can improve the security and reliability of AI-generated code (third sketch below).
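On prompt refinement (first bullet), here is a minimal sketch of the idea. The refine_prompt helper and its phrasing rules are my own illustration; the paper proposes the concept, not this code.

```python
# Hypothetical sketch: augment a user's prompt with security-specific
# language before it reaches the code assistant. The hint text below is
# illustrative, not drawn from the paper.
SECURITY_HINTS = (
    "Use parameterized queries for any SQL. "
    "Use a well-vetted cryptography library rather than hand-rolled crypto. "
    "Validate and bound all external input."
)

def refine_prompt(prompt: str) -> str:
    """Clean up stray whitespace and append security-specific guidance."""
    cleaned = " ".join(prompt.split())  # collapse accidental whitespace
    return f"{cleaned}\n\nRequirements: {SECURITY_HINTS}"

print(refine_prompt("write a function that  looks up a user by name in sqlite"))
```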
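On filtering training data with static analysis (fourth bullet), a sketch of one way to do it for Python files using Bandit, a real open-source analyzer. The choice of tool, the training_data directory, and the keep/drop rule are my assumptions, not the paper's method.

```python
# Sketch: keep only Python files that pass a static analyzer, so they can
# be used as training data. Assumes Bandit is installed (pip install
# bandit); Bandit exits non-zero when it finds issues.
import pathlib
import subprocess

def is_clean(path: pathlib.Path) -> bool:
    result = subprocess.run(
        ["bandit", "-q", str(path)],  # -q suppresses informational output
        capture_output=True,
    )
    return result.returncode == 0

corpus = [p for p in pathlib.Path("training_data").rglob("*.py") if is_clean(p)]
print(f"kept {len(corpus)} files")
```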
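On surfacing advanced settings (last bullet), the study observed that participants who adjusted model parameters tended to produce more secure solutions. A sketch of sampling several candidates at a low temperature for side-by-side review; the SDK calls and model name reflect the current OpenAI Python library and are my assumptions (the study itself used an earlier Codex model).

```python
# Sketch: request several candidate completions at a low temperature and
# compare them, rather than accepting the first suggestion. Assumes the
# OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user",
               "content": "Write a Python function that encrypts a string."}],
    temperature=0.2,      # lower temperature -> more conservative output
    n=3,                  # request several candidates to compare
)

for i, choice in enumerate(response.choices, 1):
    print(f"--- candidate {i} ---")
    print(choice.message.content)
```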
The authors conclude that AI code assistants can boost productivity but also pose security risks, especially for users unaware of potential issues.
To reduce these risks, it’s important to refine user interactions with AI, improve AI models, and educate users on secure coding practices. Future research should explore ways to further enhance the security of AI-generated code.
Link: Do Users Write More Insecure Code with AI Assistants?, 18 Dec 2023, https://lnkd.in/g5urJeSR
By Neil Perry, Megha Srivastava, Deepak Kumar, and Dan Boneh, Stanford University
Regards
Caute_Cautim