AI tools are spreading through the enterprise ecosystem at an astonishing pace. According to McKinsey, the share of businesses regularly using AI for at least one business function jumped from 20% in 2017 to 88% in 2025.
AI delivers many benefits, from increased efficiency and saved time to fewer errors and improved revenue. Unfortunately, it brings a number of risks as well. AI fundamentally expands the enterprise attack surface. Every model, prompt, plugin, API connection, training dataset, and downstream dependency can introduce a new potential point of compromise, demanding a far more robust and continuous security posture than traditional SaaS governance models were built to provide.
A BCG survey found that 60% of organizations have experienced an AI-powered cyberattack in the past year. And new research that my company recently published indicates that security teams aren’t staying ahead of AI dangers. Our 2026 report found that 66% of CISOs are using GRC tools that aren’t fit for the AI-permeated supply chain, because they aren’t designed for ongoing oversight of Nth-party risk exposure.
What’s more, while 60% of CISOs recognize that AI vendors represent a new set of risks, only 22% have adapted their onboarding processes to evaluate AI vendors specifically.
It’s critical for enterprises to rethink their approach to cyber governance to ensure it is fit for an AI-powered world.
What works for SaaS governance doesn’t necessarily apply
Most CISOs have extensive sets of solutions, policies, and procedures for managing SaaS providers and software supply chain partners. All too often, however, they simply copy and paste these for AI vendors and tools.
Data that’s fed into AI tools has the potential to be exposed to a much wider audience. Many large language models (LLMs) retain prompt data and use it for ongoing model training. Unlike with on-prem tools and most SaaS solutions, once data is entered into an AI solution, it’s outside the company’s control.
Explainability and trust are big concerns as well. Hallucinations are a well-known issue, and the black-box nature of many AI solutions makes their output hard to verify. AI tools can generate “confident lies” based on what they consider probabilistically likely, pushing falsehoods into business decision-making.
Against this background, it’s worrying to hear that 52% of CISOs use the same general onboarding processes for AI vendors and tools that they use for any other third-party entity. But it’s also understandable, given the circumstances.
Today, enterprises are facing unprecedented internal pressure to adopt AI tools at speed. Business units are demanding AI solutions to remain competitive, drive efficiency, and innovate faster. But existing cyber governance and third-party risk management processes were never designed to operate at this pace. As a result, security teams are becoming an unintended bottleneck, with vendor vetting cycles that are too slow, approval processes that are too manual, and risk assessments that are too rigid to keep up with the velocity of AI demand.
Without modernized cyber governance and AI-ready risk management capabilities, organizations are forced to choose between speed and safety. To truly enable the business, governance frameworks must evolve to match the speed, scale, and dynamism of AI adoption – transforming security from a gatekeeper into a business enabler.
Existing cyber governance has many holes
The truth is that existing GRC programs are far from completely effective, efficient, and fit for purpose. Most of them focus primarily on direct vendors, with just 41% monitoring fourth parties for cyber risk and only 13% covering Nth parties. Only 15% of CISOs agree that they are monitoring all their third, fourth, and Nth-party vendors.
What’s more, compliance doesn’t guarantee security. DORA, NIS2, and other regulatory frameworks set only minimum requirements and rely on reporting at specific points in time. While these reports are accurate when submitted, they capture only a snapshot of the organization’s security posture, so gaps such as human errors, legacy system weaknesses, or risks from fourth- and Nth-party vendors can still emerge afterward.
Even a company that’s fully compliant can still be exposed to AI risks, and 78% of CISOs admitted that they are still striving for full compliance with third-party cyber risk regulations.
This opacity around cyber governance has real-world consequences. According to the report, 60% of CISOs have seen an increase in third-party cyber incidents in the past year. Only half of those incidents originated from direct third parties; the rest were caused by more distant partners in the supply chain.
AI tools offer cautious hope
At least two-thirds of CISOs feel that they don’t have the tools they need to close the gaps in their cyber governance, even though GRC platform adoption is high. It’s a statistic that highlights how ineffective traditional tools are for managing AI risks.
In response, CISOs are increasingly adopting AI-powered tools to monitor their AI vendors and solutions. Fully 99% of those surveyed have either already implemented AI-based vendor risk assessment solutions or intend to do so.
By automating vendor access monitoring and setting tireless AI-powered tools to track dependencies and interactions, security teams can shine a light into the AI ecosystem.
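To make that concrete, here is a deliberately simplified sketch of what flagging unapproved AI tool access from network logs might look like. The log format, user names, and domain lists are all hypothetical, invented for illustration:

```python
# Hypothetical sketch: flag traffic to AI vendor domains that never passed onboarding.
# The log format, user names, and domain lists are all invented for illustration.
APPROVED_AI_VENDORS = {"api.approved-llm.example"}   # from the sanctioned vendor inventory
KNOWN_AI_DOMAINS = {"api.approved-llm.example", "api.shadow-llm.example"}

def flag_unapproved_ai_traffic(log_lines):
    """Yield (user, domain) pairs for AI vendor traffic outside the approved list."""
    for line in log_lines:
        user, domain = line.strip().split(",")       # e.g. "jdoe,api.shadow-llm.example"
        if domain in KNOWN_AI_DOMAINS and domain not in APPROVED_AI_VENDORS:
            yield user, domain

sample_log = ["jdoe,api.shadow-llm.example", "asmith,api.approved-llm.example"]
for user, domain in flag_unapproved_ai_traffic(sample_log):
    print(f"Unapproved AI tool access: {user} -> {domain}")
```

A real deployment would pull from proxy or DNS logs and a curated AI domain feed, but the principle is the same: automation surfaces shadow AI usage that no questionnaire will.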
That said, AI security tools shouldn’t be seen as a silver bullet or left to operate independently. AI systems must be purposefully built to produce verifiable, trustworthy outcomes – otherwise, they risk hallucinations that can lead to inaccurate assessments of an organization’s security posture. Additionally, AI regulations are still evolving, so a solution that meets today’s requirements might not match tomorrow’s AI risks.
Strategies that work to limit AI risks
While there’s no magic wand, there are tried-and-tested approaches that mitigate the risks of AI vendors and solutions. Mapping how data flows around the organization reveals how that data is used and eliminates blind spots.
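As a rough illustration of what such a map could look like (the systems and flows below are invented for the example), data flows can be modeled as a directed graph and walked to find every external AI service that sensitive data can reach:

```python
from collections import deque

# Hypothetical data-flow map: each edge points from a system to wherever it sends data.
DATA_FLOWS = {
    "customer_db":     ["crm", "analytics"],
    "crm":             ["support_copilot"],      # AI assistant used by the support team
    "analytics":       ["bi_dashboard"],
    "support_copilot": ["llm_vendor_api"],       # external AI vendor
}
EXTERNAL_AI_SERVICES = {"llm_vendor_api"}

def ai_exposure(source):
    """Breadth-first walk from a sensitive source; return reachable external AI services."""
    seen, queue, exposed = {source}, deque([source]), set()
    while queue:
        node = queue.popleft()
        if node in EXTERNAL_AI_SERVICES:
            exposed.add(node)
        for nxt in DATA_FLOWS.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return exposed

print(ai_exposure("customer_db"))   # {'llm_vendor_api'}: a blind spot made visible
```

Even a toy model like this shows why mapping matters: customer data reaches an external AI vendor two hops away, through a path no single team owns end to end.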
Requiring AI tools to include references for their outputs helps ensure that risk decisions are trustworthy and verifiable. Governance decisions need to be validated, so humans should stay in the loop to watch for AI hallucinations and hubris.
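A minimal sketch of that kind of guardrail, assuming a hypothetical convention where AI outputs cite sources as [1], [2], and so on, might gate assessments before they reach a decision:

```python
import re

CITATION_PATTERN = re.compile(r"\[\d+\]")    # assumes outputs cite sources as [1], [2], ...

def route_assessment(ai_output: str) -> str:
    """Only accept AI-written risk assessments whose claims can be traced to references."""
    if CITATION_PATTERN.search(ai_output):
        return "queue_for_analyst_signoff"   # a human still validates the cited claims
    return "reject_and_request_sources"      # unreferenced output is unverifiable

print(route_assessment("Vendor X lacks SOC 2 coverage [1]."))   # queue_for_analyst_signoff
print(route_assessment("Vendor X is probably fine."))           # reject_and_request_sources
```

Note that even referenced output goes to an analyst for sign-off; the check filters out unverifiable claims rather than replacing human judgment.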
Continuous, round-the-clock monitoring should replace one-off vendor intake questionnaires. Companies gain a real-time view of dependencies across the dynamic supply chain and ecosystem, allowing security teams to close potential security gaps caused by partners forgetting to disclose downstream entities. With this level of insight, they can make risk management proactive rather than reactive.
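In spirit, that shift amounts to diffing what a vendor disclosed at intake against what monitoring actually observes. Here is a minimal sketch with invented stand-in feeds (the vendor names and dependency sets are hypothetical):

```python
# Minimal sketch with invented feeds: compare what a vendor disclosed at intake
# against what continuous monitoring observes, and alert on the difference.
def disclosed_dependencies(vendor: str) -> set:
    return {"cloud_host", "payment_api"}        # stand-in for intake questionnaire answers

def observed_dependencies(vendor: str) -> set:
    return {"cloud_host", "payment_api", "new_llm_subprocessor"}  # stand-in for live telemetry

def check_vendor(vendor: str) -> None:
    for dep in observed_dependencies(vendor) - disclosed_dependencies(vendor):
        print(f"ALERT: {vendor} depends on undisclosed downstream entity: {dep}")

# In practice this runs on a schedule (hourly or daily), not once at onboarding.
for vendor in ["vendor_a", "vendor_b"]:
    check_vendor(vendor)
```

The alert here, an LLM subprocessor the vendor never mentioned, is exactly the kind of Nth-party exposure that a point-in-time questionnaire would miss.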
AI solutions can be managed safely
CISOs are at a critical juncture. AI solutions play a vital role across many business functions, so security teams need to do everything in their power to make sure AI is adopted, managed, and used safely. Now is the time to establish robust cyber governance policies, including dynamic onboarding and automated monitoring, that are suitable for managing the risks of AI tools.
As CEO and co-founder of Panorays, Matan Or-El leads an AI-enhanced cybersecurity platform that gives organizations visibility into supplier and partner risk. A serial entrepreneur with deep expertise in third-party security, he has driven Panorays’ growth since 2018.