As software application development teams embrace a growing number of automation tools that provide AI-driven (or at least AI-assisted) coding functions in their codebases, a Newtonian equal and opposite reaction is surfacing: governance controls and guardrails designed to keep AI injections in check as these technologies enter the software supply chain. What happens next?
Among the firms delivering tools in this space is JFrog. As detailed here on Techzine, JFrog’s new Shadow AI Detection helps automatically detect and create an inventory of all internal AI models and external API gateways used across the organisation to access data from either approved or ad-hoc third-party sources.
Other toolsets on offer in this space include services from Nightfall. The company’s Data Loss Prevention (DLP) platform is specifically engineered and built to detect and prevent sensitive data repositories (from personally identifiable information, PII and onwards) from leaking into unauthorised “shadow AI” tools via prompts, file uploads and copy/paste actions.
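In outline, the interception such DLP tooling performs amounts to scanning outbound text for sensitive patterns before it reaches an unsanctioned AI endpoint. The sketch below is purely illustrative — real platforms like Nightfall use trained classifiers rather than bare regexes, and the patterns here are simplified assumptions:

```python
import re

# Illustrative patterns only -- production DLP uses trained
# classifiers and covers far more data types than these three.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "uk_ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the PII categories found in an outbound prompt."""
    return [name for name, pattern in PII_PATTERNS.items()
            if pattern.search(prompt)]

def gate_prompt(prompt: str) -> str:
    """Block the prompt if it would leak PII to a shadow AI tool."""
    findings = scan_prompt(prompt)
    if findings:
        raise PermissionError(f"Prompt blocked, PII detected: {findings}")
    return prompt
```

The same gate can sit in front of file uploads and clipboard actions, which is where, per Nightfall's positioning, most shadow AI leakage happens.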
Wake up to Nightfall
“Unlike legacy DLP, Nightfall’s classifiers are explainable and adaptable. Each detection includes confidence scoring and justification metadata, so teams understand why a file was flagged and can fine-tune policies to balance protection with productivity,” said Rohan Sathe, CEO and co-founder of Nightfall. “With prebuilt protection for common document types and custom detectors for unique business assets, organisations gain both immediate value and long-term flexibility in a single platform that works across SaaS apps, endpoints and communication channels.”
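The "explainable and adaptable" claim boils down to each finding carrying a score and a human-readable reason, with a tunable threshold deciding when to block. A minimal sketch of that shape — the field names and threshold are hypothetical, not Nightfall's schema:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One flagged finding, with the metadata a reviewer needs.

    Field names are hypothetical; they illustrate the shape of
    'explainable' DLP output, not any vendor's actual schema.
    """
    detector: str       # e.g. "credit_card"
    snippet: str        # redacted excerpt that triggered the match
    confidence: float   # 0.0 - 1.0 classifier score
    justification: str  # why the classifier flagged it

def should_block(detections: list["Detection"], threshold: float = 0.8) -> bool:
    """Tuning the threshold trades protection against productivity:
    lower it to catch more, raise it to cut false positives."""
    return any(d.confidence >= threshold for d in detections)
```

Because every `Detection` records its own justification, a policy team can audit why a file was flagged rather than arguing with a black box.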
Vocal in this space and always keen to cement its position as a platform for security at every conceivable level is Palo Alto Networks. The company’s Prisma SASE and Prisma Access services provide SaaS security functions that give organisations visibility into generative AI applications, whether of approved or unapproved status.
Essentially setting out to control data flows, Palo Alto Networks AI Access Security (part of Prisma SASE) is specifically designed to monitor and control shadow GenAI apps, with a catalogue of thousands of generative AI applications, risk classification and policy controls.
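Stripped to its essentials, that kind of control is a catalogue lookup plus an allow-or-block decision on each outbound request. A hypothetical sketch, not Palo Alto Networks' implementation — the catalogue entries below are made up:

```python
# Hypothetical catalogue: real products ship risk ratings for
# thousands of GenAI apps; these three entries are invented.
GENAI_CATALOGUE = {
    "chatgpt.com": {"risk": "medium", "sanctioned": True},
    "claude.ai": {"risk": "medium", "sanctioned": True},
    "random-llm-tool.example": {"risk": "high", "sanctioned": False},
}

def policy_decision(domain: str) -> str:
    """Map an outbound GenAI request to a policy action."""
    app = GENAI_CATALOGUE.get(domain)
    if app is None:
        return "block"  # unknown GenAI app: shadow AI by definition
    if not app["sanctioned"] or app["risk"] == "high":
        return "block"
    return "allow"
```

The interesting part in production is not the lookup but the catalogue itself: keeping risk classifications current as new GenAI tools appear weekly is where the vendors earn their keep.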
Model & data as attack vectors
“The security surface extends far beyond traditional concerns. For AI systems, the model and data become the primary attack vectors,” said Meerah Rajavel, chief information officer at Palo Alto Networks, on the company’s own blog. “While frontier models from providers like Google and OpenAI carry lower risk due to extensive testing, most AI applications incorporate multiple specialised models.”
As a working example, Rajavel suggests we think about data jobs that involve parsing long documents with tables and images. A large language model like Gemini can do this, but comparatively slowly and expensively; a specialised small language model handles the single task in sub-second time at lower cost… but pulling that model from a third-party repository creates new supply chain risks.
“Organisations must scan models for vulnerabilities, manage permissions appropriately and protect data access. Runtime security becomes critical because prompts function like code and the LLM acts as an operating system. That has to be protected like a software supply chain,” said Rajavel.
Wider market options
Shadow AI detection and control is a growing marketplace. Other vendors that operate here include Netskope with its Netskope One platform, which includes AI security capabilities to detect shadow AI usage. Not exactly a like-for-like competitor but still in the same core operational arena, the SaaS management toolset from Zylo is built to help organisations discover and manage all of their SaaS applications, including unauthorised AI tools, by centralising data, risk scores and usage information.
“To address the risk [of shadow AI], CIOs should define clear enterprise-wide policies for AI tool usage, conduct regular audits for shadow AI activity and incorporate GenAI risk evaluation into their SaaS assessment processes,” said Arun Chandrasekaran at magical analyst house Gartner.
Chandrasekaran reminds us that, typically, enterprise organisations and their software engineering teams are excited about AI’s speed of delivery. However, he says, the punitively high cost of maintaining, fixing or replacing AI-generated artefacts such as code, content and design can erode promised return on investments.
“By establishing clear standards for reviewing and documenting AI-generated assets and tracking technical debt metrics in IT dashboards, enterprises can take proactive steps to prevent costly disruptions,” said Chandrasekaran.
Coming out of the shadows
Straddling this market with functions designed to shed light on the shadows is BetterCloud, with its SaaS management platform that reduces security vulnerabilities, now with an increased focus on AI. Skyhigh Security (surely not a great name for a company, but apparently formerly part of McAfee CASB) exists to drive multi-cloud discovery and policy enforcement across DLP and shadow AI concerns. Let’s also mention Reco, a company with a dedicated platform aligned to uncover shadow AI and help structure AI governance… with a promise to “instantly track” AI agents and their “data access patterns” within any given cloud instance. Reco may be one to watch.
Harmonic Security, Cyberhaven and Lasso Security also feature in this market… and it’s a growing space. As concerns surrounding shadow AI centre on data exposure, lapses in governance and compliance, hallucinations and the wider need to eradicate malicious models and agents, 2026 should provide even greater illumination in this part of the AI business.
Free image (main): Wikimedia Commons.
