AI assistants make mistakes just as humans do, and their output requires extra care and attention. Prompt Security aims to give organizations a safety net that automatically checks AI-generated code for vulnerabilities.
GitHub recently made a free version of Copilot available, putting an AI coding tool within reach of any programmer with an internet connection. Along with that convenience, the free tier also democratizes the risks that come with AI-generated code.
If developers are not vigilant, external LLMs now have free rein to push code into organizations' codebases. Prompt Security does not want to leave the security of production environments to chance.
Prompt Security’s enhanced solution
To address these challenges, Prompt Security has announced an upgrade to its proprietary platform covering GitHub Copilot and other AI code assistants. The enhanced solution is said to offer stronger protection against data leaks, detect vulnerable code faster, and provide greater visibility into AI's impact on codebases.
Some key new features:
– An inventory of all AI tools in use, covering developers on both GitHub Copilot Enterprise and the free version
– Enhanced capabilities for cleaning and editing code in real time
– Analysis of GitHub Copilot responses, blocking generated code that is potentially dangerous or vulnerable (see the sketch after this list)
– Support for GitHub Copilot Free License, Amazon Q, and GitLab Duo, among others
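Prompt Security has not published how this blocking works internally, so the following is only a minimal sketch of the idea behind such a gate. The `review_suggestion` helper and the regex patterns are illustrative assumptions, not the product's actual mechanism; a real scanner would rely on far deeper static analysis.

```python
import re

# Hypothetical screening rules for illustration only; a real product would
# use analysis far more sophisticated than regular expressions.
VULNERABLE_PATTERNS = [
    (re.compile(r"(?i)(api_key|password|secret)\s*=\s*['\"][^'\"]+['\"]"),
     "hard-coded credential"),
    (re.compile(r"execute\([^)]*\+"), "SQL query built by string concatenation"),
    (re.compile(r"verify\s*=\s*False"), "TLS certificate verification disabled"),
]

def review_suggestion(generated_code: str) -> tuple[bool, list[str]]:
    """Return (allow, findings) for an AI-generated code suggestion.

    The suggestion is blocked (allow=False) if any known-vulnerable
    pattern appears in it.
    """
    findings = [label for pattern, label in VULNERABLE_PATTERNS
                if pattern.search(generated_code)]
    return (not findings, findings)

# Example: a suggestion that hard-codes a credential gets blocked.
allowed, findings = review_suggestion('password = "hunter2"\nlogin(password)')
print(allowed, findings)  # -> False ['hard-coded credential']
```

The point of the sketch is the placement, not the patterns: the check sits between the assistant's response and the developer's editor, so vulnerable suggestions never land in the codebase in the first place.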
Balancing security and productivity
The solution is designed to help organizations meet compliance and security requirements while still reaping the benefits of AI code assistants. Earlier research suggests these tools can increase developer productivity by as much as 55 percent. That figure is not universal, however, and AI programming assistance brings headaches of its own.
Itamar Golan, CEO and co-founder of Prompt Security, stresses the importance of prioritizing security without compromising productivity. This is ultimately the balance organizations will have to strike, especially when cost-cutting is necessary to remain competitive.
There is no single framework for safe AI use, so vendors like Prompt Security must close the loopholes before vulnerable code proliferates over time. Should tech companies cut their programming teams further, as Meta and others have predicted, this risk will only increase.