Qualys combats AI model theft to de-risk machine intelligence

Qualys combats AI model theft to de-risk machine intelligence

Qualys wants to be the AI de-risk platform company of choice. Actually, it doesn’t i.e. it still wants to be the (positively) disruptive cloud-based IT, security and compliance solutions company. But within that broad brush corporate definition, the organisation wants to be the platform that enterprises go to when they say: great, we’ve now got AI, but wait, what AI assets and components are we using in the business and how robust, secure, compliance and unbiased are they? The straightforwardly named Qualys TotalAI enables holistic discovery and vulnerability assessment of AI workloads to detect data leaks, injection issues and model theft.

The technology is designed to address the challenges and risks associated with securing generative AI and large language model (LLM) applications. 

An expanded attack surface

As organizations increasingly integrate AI and LLMs into their products and solutions, Qualys says that these firms face an ‘expanded attack surface’ and heightened cyber risks. 

But why is this so?

If there is an expanded attack surface that results from the use of generative AI and LLMs, it may be because these new forms of intelligence are being tasked with making connections to data resources outside of approved business technology guidelines i.e. the AI models and engines themselves are making decisions faster than IT management can stipulate any reasonable level of guidance and control. Perhaps even more likely is the fact that unknown (or unapproved) LLMs or AI models (known as shadow models) increase exposure to threats, including model theft and data leaks from existing common vulnerabilities & exposures (CVEs) or misconfigurations. Additionally, there is a rising risk of accidental data loss, compliance issues and reputational damage due to inappropriate content and AI hallucinations generated by these models. 

What is AI model theft?

For completeness here, AI model theft is an occurrence when an adversary (or possibly an ex-employee that has retained login details) is able to make a duplicate of a machine learning model without actually requiring approved direct access to the model’s parameters or data. 

As succinctly defined here on Securing AI by Marin Ivezic and Luka Ivezic, “Model stealing, also known as model extraction, is the practice of reverse engineering a machine learning model owned by a third party without explicit authorization. Attackers don’t need direct access to the model’s parameters or training data to accomplish this. Instead, they often interact with the model via its API or any public interface, making queries (i.e. sending input data) and receiving predictions (i.e. output data). By systematically making numerous queries and studying the outputs, attackers can build a new model that closely approximates the target model’s behaviour.”

In answer to this reality then, Qualys TotalAI expands Qualys’ asset visibility, vulnerability detection and remediation capabilities to generative AI and adds LLM scanning. The solution specifically addresses the OWASP Top 10 most critical risks for LLM applications: prompt injection, sensitive information disclosure and model theft. With Qualys TotalAI, organizations can use AI while upholding rigorous security standards.

“We’re only beginning to scratch the surface of AI and LLM’s potential for driving value for enterprises. At the same time, we need to secure this burgeoning journey, so it doesn’t add new risk to the business,” said Sumedh Thakar, president and CEO of Qualys. “At Qualys, we are committed to helping our customers stay ahead of emerging cybersecurity risk, and with Qualys TotalAI, enterprises can focus on growth and innovation, knowing they will stay protected from the most critical AI threats.” 

Qualys TotalAI will allow organisations to discover all AI workloads and classify all AI and LLM assets, including GPUs, software, packages and models, in production and development while correlating their exposure with the attack surface.

650+ AI-specific detections

Thakar says that his firm’s technology helps prevent model theft and extends the power of the company’s Qualys TruRisk technology to assess, prioritise and remediate AI software vulnerabilities with 650+ AI-specific detections, correlated with threat feeds and asset exposures, to prevent the risk of model and data theft.

Users can get hold of comprehensive remediation capabilities to exceed security requirements, align with SLAs and meet business needs. Proactively mitigate potential threats to ensure seamless operations and a strong AI and LLM security posture. AI engineers can also assess LLMs for critical attack exposures like prompt injection, sensitive information disclosure as well as the aforementioned model theft per the OWASP Top 10 for LLMs.

Our AI went wrong, now what?

Looking at the real world implementation surface of AI today, Qualys is clearly attempting to help organisations get past the ‘AI is here, what do we do when it goes wrong?’ conversation, which many businesses will inevitably have some form of in the immediate future. Knowing how your firm’s treasured AI engine is being potentially reverse-engineered through model theft (and what the IT team is doing to protect against this occurrence) is clearly part of the responsibility that organisations need to take as they now graft complex and powerful machine learning-powered applications and data services onto the operational layers that they do business on.

Also read: Qualys TotalCloud 2.0 is first CNAPP to extend protection to SaaS apps