
JFrog: How to leap along the AI workflow tightrope

The pace of software development has never been faster. It is currently fuelled in no small part by the race to embed agentic functions beneath the surface of the developer’s codebase and at the upper levels of the user interface. With new AI models, data pipelines and automation tools to juggle, we have created something of an AI workflow tightrope. So what happens next?

The question we need to ask is: how do we traverse this essential new link to more intelligent applications without falling off? Or, to put it in more technical terms: with tension now surfacing on the AI workflow tightrope, how do developers move fast without sacrificing trust?

Using AI to accelerate delivery might help meet a deadline, but it can just as easily compromise reliability, introduce vulnerabilities, or enable misuse: problems that are much harder to fix once systems are in production. That’s the tightrope modern software and AI teams are walking: speed is essential, but trust is non-negotiable.

The shift in developer & AI workflows

Paul Davis, field CISO at JFrog, thinks he has some valuable answers to share in this space. He reminds us that over the last decade, the way we build software has changed drastically.

“Developers and data scientists are now expected to take responsibility for quality, security and outcomes across the lifecycle, rather than handing projects to a separate team for testing and validation. At the same time, platform engineering and ML practices have given teams more autonomy, letting them self-serve the tools they need to build, train, and deploy efficiently,” explained Davis.

He notes that this autonomy also increases exposure, particularly with the rise of ‘shadow AI’, where AI tools and models are deployed without IT oversight. This highlights how easily unmanaged AI can bypass governance and introduce unmonitored risks across data, code, and infrastructure. Even teams just beginning to explore AI-driven development face pressure to move quickly, sometimes without the appropriate guardrails.

“You can see this most clearly in AI pipelines, where models are retrained, validated and redeployed almost continuously. The temptation is always to move fast, but doing so without safeguards can have serious consequences,” said Davis.

When speed undermines trust

The JFrog team says that recent high-profile incidents in the software industry illustrate just how damaging it can be to prioritise speed over trust. In one case, a global cybersecurity company pushed out a faulty content update that crashed millions of machines worldwide, wiping billions from its market value and triggering lawsuits. In another, rushed and untested updates from a consumer tech brand left customers with broken devices and lost data, ultimately costing the CEO their job and knocking half a billion dollars from the firm’s valuation.

“The same risks apply to AI. A compromised model can be backdoored to behave maliciously; poisoned training data can subtly manipulate predictions; adversarial inputs can cause critical misclassifications; and even common open-source model formats can be abused to execute arbitrary code when loaded. Once confidence is lost, it takes years and significant investment to rebuild,” clarified Davis.
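Davis’s point about open-source model formats is concrete: Python’s pickle, still widely used for model serialisation, will execute arbitrary code embedded in a file at the moment it is loaded. A minimal defensive sketch, using the restricted-Unpickler pattern from the Python documentation (the allow-list contents here are purely illustrative):

```python
import io
import pickle

# Illustrative allow-list: only plain built-in containers may be deserialised.
SAFE_GLOBALS = {
    ("builtins", "list"),
    ("builtins", "dict"),
    ("builtins", "set"),
}

class RestrictedUnpickler(pickle.Unpickler):
    """Refuses to resolve any global outside the allow-list, so a poisoned
    pickle file cannot reach os.system, eval or similar on load."""

    def find_class(self, module, name):
        if (module, name) in SAFE_GLOBALS:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(
            f"blocked potentially unsafe global: {module}.{name}"
        )

def safe_load(data: bytes):
    """Deserialise untrusted pickle bytes under the allow-list."""
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

A real pipeline would go further, for example preferring weight-only formats such as safetensors, but the sketch shows why “just loading a model” is itself a security event.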

The software (and AI) factory

Having worked in this arena extensively, Davis suggests it helps to think of software development like a factory.

  • Developers and data scientists are on the production line, building features and training models.
  • Platform engineering, DevOps and ML teams act as factory managers, ensuring the process runs smoothly.
  • CISOs, CIOs and governance leaders serve as regulators, setting the standards and ensuring compliance.

“No single group can guarantee trust on its own. If developers move too quickly without quality checks, if managers fail to provide guardrails, or if regulators lack visibility into compliance, the entire system falters. In AI workflows, this breakdown might mean no one can say which dataset trained a model, who approved its deployment, or how its outputs are being monitored in production,” specified Davis.

The challenges of AI-driven workflows

But balancing speed and trust is much easier said than done. Organisations struggle to manage ownership at scale: with thousands of applications, services, and models in motion, tracking responsibility quickly becomes a manual and error-prone process. Shadow AI accelerates this problem by introducing AI assets without visibility, such as MCP servers, leaving gaps in oversight and increasing the chances of unmanaged risk.

Thinking about what happens inside organisations today, Davis points to the way security vulnerabilities across the model lifecycle further complicate matters. He notes that model backdoors, malicious datasets, adversarial attacks and vulnerable dependencies all pose serious risks – especially with the widespread use of open-source models and public registries where compromised artefacts can be easily introduced. These threats can silently manipulate predictions, influence automated decisions, or enable attackers to execute arbitrary code.

“Implementation weaknesses add another layer of exposure. Weak authentication, immature ML tooling, and misconfigured containers can give attackers access not only to the model itself, but also to surrounding systems and data. Since many AI workloads rely on shared infrastructure, a single misconfiguration can escalate into a much broader breach,” said Davis.

Compliance and traceability add another layer of complexity. Policies are meaningless without proof, and for AI specifically, organisations must be able to demonstrate which dataset trained a model, which validation tests it passed, and how its behaviour changed over time. Too often, that evidence is gathered manually through screenshots or reports, which are slow, unreliable, and impossible to scale.

All of this is compounded by metadata fragmentation. Information scattered across notebooks, pipelines, registries, cloud platforms, and container systems makes it difficult to piece together a coherent picture of how a model was developed and deployed. During an incident, even answering basic questions such as “who trained this model?” or “which dataset was used?” can take days, time organisations cannot afford to lose.
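One way to make those incident questions answerable in seconds rather than days is to keep a single lineage record per model version. A minimal sketch, where the field names and registry shape are assumptions for illustration rather than any particular product’s schema:

```python
import datetime
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelLineage:
    """Hypothetical per-version lineage record unifying the facts an
    incident responder needs: who trained it, on what data, who approved it."""
    model_name: str
    version: str
    trained_by: str
    dataset_digest: str
    approved_by: str
    trained_at: datetime.datetime

# Indexed by (name, version): "who trained this model?" becomes one lookup.
lineage_registry: dict[tuple[str, str], ModelLineage] = {}

def record_lineage(entry: ModelLineage) -> None:
    lineage_registry[(entry.model_name, entry.version)] = entry
```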

Building trust without slowing down

“Despite these challenges, it is possible for organisations to strike a balance between speed and reliability. The key is to treat AI and software development as part of an integrated system of record, one that unifies ownership, security controls, and compliance evidence across the entire lifecycle. This includes dependency and artefact validation to ensure models, datasets, and containers originate from trusted sources and are scanned for malicious behaviour, eliminating one of the most common vectors for compromise,” said Davis.

Automating evidence collection and using digitally signed, tamper-proof records replaces the unreliable manual reporting that slows teams down and creates inconsistency. Equally important is embedding both proactive and reactive security controls; blocking unsafe models before release, while continuously monitoring deployed models for drift, anomalies, or suspicious behaviour that may signal compromise or degradation.
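Tamper-evident evidence records can be sketched with a keyed hash: each record is serialised canonically and signed, so any later modification is detectable. This is an illustrative HMAC sketch, not a production signing scheme; a real deployment would use asymmetric signatures with a key held in a KMS or HSM:

```python
import hashlib
import hmac
import json

# Illustrative key only; never hard-code signing material in real systems.
SIGNING_KEY = b"demo-signing-key"

def sign_record(record: dict) -> dict:
    """Serialise canonically (sorted keys) and attach an HMAC-SHA256 tag."""
    payload = json.dumps(record, sort_keys=True).encode()
    tag = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"record": record, "signature": tag}

def verify_record(signed: dict) -> bool:
    """Any edit to the record after signing changes the payload and fails here."""
    payload = json.dumps(signed["record"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["signature"])
```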

Finally, ensuring integrity throughout the pipeline requires artefact signing and promotion gating, so every component that reaches production is exactly the one that was approved, with no opportunity for tampering along the way. Most importantly, collaboration has become the norm.
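At its core, promotion gating is a refusal to move any artefact whose digest was not explicitly approved. A hypothetical sketch of that gate (the function and store names are assumptions):

```python
# Hypothetical store of digests that have been explicitly signed off.
APPROVED_DIGESTS: set[str] = set()

def approve(digest: str) -> None:
    """Record an artefact digest as approved for release."""
    APPROVED_DIGESTS.add(digest)

def promote(digest: str) -> str:
    """Gate: only an explicitly approved artefact may reach production,
    so what ships is exactly what was signed off, byte for byte."""
    if digest not in APPROVED_DIGESTS:
        raise PermissionError(f"artefact {digest} was not approved for promotion")
    return "production"
```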

“Developers, data scientists, platform engineers and security teams all share responsibility for maintaining both speed and trust, and when each group operates with shared visibility and alignment, organisations can innovate rapidly without compromising reliability or safety,” he said.

Shipping with trust & velocity

Davis concludes by saying that today’s AI and software pipelines demand speed, but not at the expense of trust. As AI becomes part of the world’s critical infrastructure, the cost of getting things wrong increases dramatically.

The organisations that thrive will be those that treat trust as an asset, building it into every stage of development just as deliberately as they build new features or models. Balancing speed and trust isn’t just an engineering challenge anymore; it is a business imperative.