Generative AI is unprecedentedly popular, but its vulnerabilities receive far less attention. Recent research by Rezilion indicates that most (public) generative AI initiatives still contain many vulnerabilities.
Research by supply chain security specialist Rezilion into the 50 most popular open-source generative AI projects shows that they still carry considerable security risks. These fall into categories such as trust boundary risks, data management risks, inherent model risks and general security issues.
AI models and solutions are often open
Among other things, the researchers found that many AI models grant extensive access and broad authorization without adequate security controls. Combined with the lack of basic, mature security practices in the open-source projects that build on these solutions, this can easily lead to breaches.
According to Rezilion, the root cause is that the underlying AI models and solutions have a poor security posture. On average, popular generative AI models and solutions score 4.6 out of 10 on the OpenSSF Scorecard. The most popular solution, Auto-GPT, scores even lower at 3.7.
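For readers who want to verify such scores themselves, the sketch below shows one way to query a project's Scorecard results. It is a minimal Python example that assumes the public OpenSSF Scorecard REST API at api.securityscorecards.dev and uses the Auto-GPT repository path purely as an illustration; field names and the repository location may differ from the live service.

```python
import json
import urllib.request

# Assumed layout of the public OpenSSF Scorecard REST API:
# /projects/github.com/<owner>/<repo> returns the latest scan as JSON.
SCORECARD_API = "https://api.securityscorecards.dev/projects"


def fetch_scorecard(owner: str, repo: str) -> dict:
    """Fetch the latest Scorecard result for a GitHub repository."""
    url = f"{SCORECARD_API}/github.com/{owner}/{repo}"
    with urllib.request.urlopen(url, timeout=30) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Example repository path; the Auto-GPT project has been reorganised
    # over time, so adjust owner/repo as needed.
    result = fetch_scorecard("Significant-Gravitas", "Auto-GPT")
    print(f"Aggregate score: {result.get('score')} / 10")
    for check in result.get("checks", []):
        print(f"  {check.get('name')}: {check.get('score')}")
```

The aggregate score printed here corresponds to the 0-to-10 figure cited in the report, while the per-check results indicate which practices (branch protection, dependency pinning, and so on) drag a project's score down.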
Recommendations
Naturally, Rezilion also offers recommendations for the safe use and deployment of generative AI models and solutions. These include training teams to recognize the risks and closely monitoring security risks arising from LLMs and the open-source ecosystems around them.
In addition, the researchers suggest implementing robust security practices and raising security awareness.
Also read: ‘Employees are embracing AI, but lack security skills’