
Researchers at Stanford University report that the use of AI assistants like GitHub Copilot tends to increase the number of vulnerabilities in code.

The study suggests that developers who use AI assistants are more likely to introduce vulnerabilities than developers who don't. In addition, using AI assistants can increase the likelihood of insecure code going unnoticed.
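To illustrate the kind of flaw at stake, consider a classic SQL injection bug. The minimal Python sketch below is not taken from the study, and the function and table names are hypothetical; it contrasts a vulnerable query with its parameterized counterpart:

    import sqlite3

    def find_user_insecure(conn: sqlite3.Connection, username: str):
        # Vulnerable: user input is pasted straight into the SQL string,
        # so an input like "x' OR '1'='1" matches every row in the table.
        query = f"SELECT id, name FROM users WHERE name = '{username}'"
        return conn.execute(query).fetchall()

    def find_user_secure(conn: sqlite3.Connection, username: str):
        # Safe: a parameterized query lets the driver escape the input.
        return conn.execute(
            "SELECT id, name FROM users WHERE name = ?", (username,)
        ).fetchall()

Both functions look equally plausible at a glance, which is precisely why such a suggestion can slip past a developer who trusts the tool's output.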

False sense of security

The study found that while AI assistants improve developer productivity, the developers using them consulted less documentation, including guidelines on protecting code against vulnerabilities. The researchers indicate that inexperienced users of AI assistants may gain a false sense of security.

Positive note

Despite the risks identified, Stanford University's researchers are optimistic about AI-based software development tools. As long as developers actively review code and fine-tune configurations, AI-based tools can help develop more secure solutions, the researchers note.

The researchers emphasize that developers should pay close attention when working with tools like GitHub Copilot, especially when developing critical projects.

Tip: Developers sue GitHub Copilot for software piracy