Chainguard has announced a new service focused on securing agent skills, a rapidly growing component of modern software development. With the introduction of Agent Skills, the company aims to better manage the risks arising from the increasing use of AI agents in development processes.
Agent skills are small, modular instruction sets that define what tasks an AI agent can perform. Developers use them to add functionality such as browser automation, document processing, or code generation. These skills are often shared via open platforms and registries, which accelerates adoption but also introduces new vulnerabilities.
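To make the concept concrete, here is a minimal sketch of what such a modular instruction set might look like. The field names and structure are illustrative assumptions, not the schema of Chainguard or any real registry:

```python
from dataclasses import dataclass, field

# Hypothetical model of an agent skill: a small instruction set
# plus the capabilities it grants the agent. The field names here
# are illustrative, not any registry's actual format.
@dataclass
class AgentSkill:
    name: str
    version: str
    instructions: str                   # what the agent should do
    capabilities: list[str] = field(default_factory=list)  # e.g. "browser"

browser_skill = AgentSkill(
    name="browser-automation",
    version="1.0.0",
    instructions="Open the given URL and extract the page title.",
    capabilities=["browser"],
)
```

Because a skill is just a shareable bundle of instructions and permissions, anyone who can publish one to an open registry can influence what an agent does, which is exactly the risk the article describes.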
According to Chainguard, these components currently lack sufficient oversight and security. Recent incidents show that malicious actors can, with relative ease, distribute harmful agent skills that appear legitimate. In some cases, AI agents were used to install malware undetected, making them part of broader supply chain attacks.
Chainguard CEO and co-founder Dan Lorenc argues that this development is comparable to earlier phases of software distribution, where new artifacts quickly gained trust while security lagged behind. He points out that agent skills are evolving even faster and, therefore, introduce risks more rapidly. According to him, these skills are becoming an integral part of the software supply chain, making it necessary to secure them properly from the start.
Agent skills under continuous monitoring
Chainguard’s new service focuses on the automatic collection, analysis, and enhancement of agent skills. In this process, they are assessed against a set of security and quality rules, after which vulnerabilities are addressed before they become available to developers. The system also tracks changes, creating a verifiable history of modifications and assessments.
The company uses a continuous process in which agent skills are rechecked as soon as updates are released. This ensures the catalog remains compliant with current security standards. Developers can then easily integrate a verified skill without conducting extensive audits themselves.
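The process described above, checking each skill against a rule set and keeping a verifiable history of assessments, can be sketched roughly as follows. The rules, record format, and hash-chained log are assumptions for illustration; they are not Chainguard's actual implementation:

```python
import hashlib
import json

# Illustrative rule set: each rule inspects a skill (a plain dict here)
# and returns True if the skill passes.
RULES = [
    ("no-shell-exec", lambda skill: "os.system" not in skill["instructions"]),
    ("declares-capabilities", lambda skill: len(skill.get("capabilities", [])) > 0),
]

def assess(skill: dict) -> dict:
    """Run every rule against the skill and report which ones failed."""
    failures = [name for name, check in RULES if not check(skill)]
    return {"skill": skill["name"], "version": skill["version"],
            "passed": not failures, "failures": failures}

def append_record(log: list[dict], assessment: dict) -> list[dict]:
    """Chain each assessment to the previous entry via its hash,
    so the history of modifications and checks is verifiable."""
    prev = log[-1]["hash"] if log else "0" * 64
    body = json.dumps({"prev": prev, **assessment}, sort_keys=True)
    log.append({"prev": prev, "assessment": assessment,
                "hash": hashlib.sha256(body.encode()).hexdigest()})
    return log

log: list[dict] = []
skill = {"name": "doc-processing", "version": "2.1.0",
         "instructions": "Summarize the attached PDF.",
         "capabilities": ["filesystem"]}
append_record(log, assess(skill))      # initial release is assessed
skill["version"] = "2.2.0"
append_record(log, assess(skill))      # recheck as soon as an update lands
```

Re-running the assessment on every update, as in the last two lines, is what keeps a catalog current, and chaining each record to the previous hash is one common way to make the audit trail tamper-evident.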
IDC analyst Katie Norton emphasizes that the rise of AI agent ecosystems increases the software supply chain's attack surface. She argues that agent skills are comparable to third-party software components and therefore require the same level of control and maintenance. Without structural validation and transparency, trust in AI-driven development could come under pressure.
With Agent Skills, Chainguard aims to establish a standard for the secure management of AI-related building blocks. The service is set to be expanded later to include broader rule sets and support for proprietary agent skills, enabling organizations to secure their own AI components according to the same principles.