JFrog makes big splash on Nvidia

Software supply chain technology company JFrog has this month joined the many firms snuggling up to capitalisation-focused GPU firm Nvidia. The JFrog Software Supply Chain Platform is now enjoying a new level of integration with Nvidia NIM microservices, which form part of the upper case-centric company’s Nvidia AI platform. Now styling itself as the “liquid software company” (Ed: surely it should be land & liquid to align with its amphibian nature?), JFrog wants to convey its capability for composable yet unified infrastructure technologies that can work across complex new AI service structures.

Straddling its froggy legs outward across both secure DevSecOps and MLOps competencies, JFrog insists its Nvidia NIM integration will enable deployment of GPU-optimised, pre-approved machine learning models and large language models (LLMs).

“The demand for secure and efficient AI implementations continues to rise, with many businesses aiming to expand their AI strategies in 2025. However, AI deployments often struggle to reach production due to significant security challenges,” said Gal Marder, chief strategy officer at JFrog. “AI-powered applications are inherently complex to secure, deploy and manage – [and] concerns around the security of open source AI models and platforms continue to grow.”

The ‘scaling’ makes AI tough

Marder suggests that his firm’s work with Nvidia will result in easy-to-deploy software products that let companies deliver their AI/ML models. He says that data scientists and ML engineers face significant challenges when attempting to scale their enterprise ML model deployments.

But why should scale be so challenging?

Because, says JFrog, the complexities of integrating AI workflows with existing software development processes – coupled with fragmented asset management, security vulnerabilities and compliance issues – can lead to lengthy, costly deployment cycles and failed AI initiatives. 

The JFrog integration with Nvidia NIM enables enterprises to deploy and manage foundational LLMs – including Meta’s Llama 3 and models from Mistral AI – while maintaining enterprise-grade security and governance controls throughout their software supply chain. JFrog Artifactory – the heart of the JFrog Platform – provides a tool for hosting and managing software artifacts, binaries, packages, ML models, LLMs, container images and components throughout the software development lifecycle.
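To make the Artifactory role concrete, here is a minimal sketch of how a team might resolve a stored model artifact from an Artifactory repository over HTTP. The host, repository and artifact path below are hypothetical placeholders (not values from the article), and credentials handling is omitted.

```python
# Hedged sketch: building the URLs used to fetch an artifact (e.g. an ML model
# file) from a JFrog Artifactory instance. Host, repo and path are hypothetical.
import urllib.request

ARTIFACTORY_URL = "https://example.jfrog.io/artifactory"  # hypothetical host
REPO = "ml-models-local"                                  # hypothetical repository
ARTIFACT = "llama3/8b/model.safetensors"                  # hypothetical artifact path


def artifact_download_url(base: str, repo: str, path: str) -> str:
    """Build the plain download URL Artifactory serves artifacts from."""
    return f"{base}/{repo}/{path}"


def artifact_info_url(base: str, repo: str, path: str) -> str:
    """Build the Storage API URL (GET /api/storage/...) for artifact metadata."""
    return f"{base}/api/storage/{repo}/{path}"


if __name__ == "__main__":
    url = artifact_download_url(ARTIFACTORY_URL, REPO, ARTIFACT)
    print(url)
    # With credentials configured, the artifact could then be fetched, e.g.:
    # urllib.request.urlretrieve(url, "model.safetensors")
```

The point of routing model files through the same repository as packages and container images is that the existing DevSecOps controls (scanning, access policy, traceability) apply to models too.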

What is Nvidia NIM?

TECHNICAL NOTE: Nvidia NIM (Nvidia Inference Microservices) is a collection of microservices that helps developers deploy AI models across clouds, data centres and workstations. It’s part of the Nvidia AI Enterprise software platform.
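In practice, a running NIM microservice is consumed over an OpenAI-compatible HTTP API. The sketch below assembles a chat-completion request for such an endpoint; the port number and model name are assumptions for illustration, not details from the article, and the actual call only works against a live NIM container.

```python
# Hedged sketch: building (and optionally sending) an OpenAI-style
# chat-completion request to a locally running NIM microservice.
# The endpoint port and model name below are assumptions.
import json
import urllib.request

NIM_ENDPOINT = "http://localhost:8000/v1/chat/completions"  # assumed local default


def build_chat_request(model: str, prompt: str) -> dict:
    """Assemble the JSON body for an OpenAI-style chat completion call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,
    }


def post_chat(url: str, body: dict) -> dict:
    """POST the request body as JSON; requires a live NIM container at `url`."""
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__":
    body = build_chat_request("meta/llama3-8b-instruct", "Say hello")
    print(json.dumps(body, indent=2))
    # post_chat(NIM_ENDPOINT, body)  # uncomment with a NIM container running
```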

According to IDC, by 2028, 65% of organisations will use DevOps tools that combine MLOps, LLMOps, DataOps, CloudOps, and DevOps capabilities to optimize the route to AI value in software delivery processes.  

“The rise of open source MLOps platforms has made AI more accessible to developers of all skill levels to quickly build amazing AI applications, but this process needs to be done securely and in compliance with today’s quickly evolving government regulations,” said Jim Mercer, IDC’s programme vice president for software development across DevOps & DevSecOps. “As enterprises scale their generative AI deployments, having a central repository of pre-approved, fully compliant, performance-optimized models developers can choose from and quickly deploy while maintaining high levels of visibility, traceability, and control through the use of existing DevSecOps workflows is compelling.” 

By integrating Nvidia NIM into the JFrog Platform, software application developers can access Nvidia NGC (Nvidia GPU Cloud) as a hub for GPU-optimised deep learning, ML and high-performance computing (HPC) models, providing customers with a single source for models, software and tools, while leveraging enterprise DevSecOps best practices to gain visibility, governance and control across their software supply chain.

Feeling froggy yet?

So JFrog connected more, integrated more, provisioned (for scale, scope and solution span) more and generally became one step more meaty, spicy bullfrog than whatever the name for a small frog is (Ed: froglet, you fool) with its Nvidia alignment here.

What it really does for live production teams is enable them to version, secure and deploy AI models using the same software development workflows (in this case, ones that come with JFrog DevSecOps DNA) that they already know and trust… and that’s what enterprise software application development automation efficiency is all about, i.e. the ability to use previously codified, standardised and ratified technologies in more than one place, safely.

If that doesn’t get you interested in a plate of roasted or fried frog’s legs, then nothing will… plus anyway, they taste like chicken.