JFrog calls for leg-up on MLOps flops

Software supply chain technology company JFrog has this month highlighted the disparities between MLOps and security practice inside working IT departments, especially those that claim to have closely forged connections to the business function. With MLOps (the practice of applying systems administration, database administration and other supporting operational disciplines to machine learning) now feeding AI models as such an essential part of the software supply chain, the company suggests that any perception of connectedness between leadership and frontline teams is increasingly fragile and could lead to real risk. We are now, it seems, in an era in which software robustness is not just a measure of an individual app, an individual API or connection point, an individual microcomponent or service… it is now a question of an attack surface that spans the entire software supply chain. So how should we go forward?

Software supply chain security breaches are said to be experiencing a significant uptick, with a June 2023 IDC survey pointing to an estimated 241% increase in attacks year-over-year. Surprisingly, then, JFrog’s own report into this area says that only 30% of survey respondents identified the need to address vulnerabilities in their software supply chain as a top security concern.

Disconnects & dislodged disparities

What is the actual nature of these disconnects and disparities then? JFrog suggests that there are disagreements and dislodged connections between security executives and frontline software teams concerning malicious open source package detection, AI/ML integration and code-level security scans.

In more detail, this means many business executives think their organisations possess tools to detect malicious open source packages, while only 70% of developers agree with this statement. The commercial business function also seems to be under the impression that ML models are being used in its software applications, but in reality this is only the case in around half of live production environments.

The business function also appears to think that AI/ML tools are being used for security scanning and remediation processes, but the number of DevSecOps tools and teams actually working at this level is modest. Many executives believe code-level security scans are conducted regularly, but developers report that this is not always the case.

“The complexity of today’s software supply chain poses unprecedented risks. Despite leadership efforts to equip frontline teams with the right equipment, developers are struggling to improve efficiency and accelerate productivity due to tool sprawl, lengthy open source and ML model approvals, plus audit and compliance checks,” said Moran Ashkenazi, SVP & CISO, JFrog. “This discrepancy highlights the urgency for organisations to rethink their security strategies, focus more on AI/ML components and align executives and doers on a mission to fortify their software supply chains.”

JFrog’s study also delves into regional disparities in software supply chain security, visibility and use of AI/ML, such as the share of EMEA respondents who were unaware of tools for identifying malicious open source packages, a higher proportion than in the US (9%) and Asia (1%), highlighting a substantial disconnect in EMEA’s security strategies and operational understanding.

Europe’s risk-averse environment

“Only 82% of EMEA respondents reported using AI/ML models, compared to 91% in the US and 99% in Asia. This variance may point to Europe’s risk-averse environment influenced by strict regulations, while we see faster adoption of AI/ML technologies in the US,” noted Ashkenazi and team.

The question then arises: if any of this discussion holds water, how have these perceptions arisen and what should we do about them? Let’s remember that the bread of this sandwich is a DevSecOps platform company telling us that developers and security teams need to think about how their operations should dovetail together and form more commercially aware links to the business function… which is, of course, exactly what DevSecOps firms love to talk about.

It may well be that the hype and hyperbole the generative AI movement has brought with it is so intense that businesspeople think ‘okay, well, my enterprise apps must have AI inside now’, when in fact the whole process of rolling out new intelligence functions is obviously slower, more considered and no quick plug-and-play affair. Whatever the reason, the supporting and ancillary toolset vendors that seek to fix these issues throughout the software supply chain are keen to tell us how much we need to start thinking about the bigger picture.

As for JFrog, perhaps it’s now jumping for joy if its very existence makes it the dish of the day?