AI is changing. The market for artificial intelligence is spiralling through various stages of adolescence and (sometimes surprisingly rapid) maturity. This tumultuous period of growth sees the IT industry focused on AI issues straddling everything from shadow AI to GPU hoarding to infrastructure overprovisioning to hallucinations, bias and so on. With foundation model training, inference and reasoning engines taking up many column inches, we’re also concerned about the span now developing between large and small language models (medium models do exist, but they make fewer headlines) and we haven’t even mentioned energy costs, compliance and real-time AI compute analysis at the Internet of Things edge.
Given this backdrop, what do we need to think about the shape of AI in 2026? This is part one of a two-part feature series and part two is linked here.
Keen to explain where AI really will start to reshape next, Techzine started off its analysis of the state of the nation in this space by speaking to Vaibs Kumar, senior vice president of technology at IFS.
“Let’s think about where we are today and how we got here,” advised Kumar, speaking at the IFS Industrial X Unleashed 2025 conference in New York this winter. “Almost all the progress over the last 18 months is underpinned by allowing an LLM to think at inference time (aka test time compute). This approach gives the model the time to compute and work through its reasoning processes as it scans over the ‘mean’ answer values it can deliver. Key research today is focused on how to get LLMs to maintain appropriate context over long-running tasks and outcomes. That could lead to self-learning and adaptation – something to watch out for in 2026.”
Kumar talks about context collapse as a strong theme going forward. For a lot of real world problems that require long-running thinking, planning and reasoning models suffer from so-called ‘context collapse’ because LLMs have a defined context window inside which they operate. For completeness here, we can define context collapse as the ‘phenomenon’ experienced when the loss of important information from an LLM means earlier parts of a long conversation or document fall outside the model’s effective context window, causing it to forget or misinterpret prior details. It leads to degraded coherence, incorrect assumptions, or repetition because the model no longer has access to the full dialogue history it was relying on.
“What this means for the future is that we’ll all get fewer non-deterministic outcomes when building agentic AI services and, importantly, if software engineering teams are building mission-critical life-critical systems to support industrial AI deployments, then they have the power to code appropriately,” explained IFS’ Kumar.
Age of autonomous agents
Emanuela Zaccone, AI for cybersecurity product manager at Sysdig says that in 2026, AI will be shaped less by the size of its models and more by the rise of autonomous agents. As organisations move from agentic AI experimentation to real deployment, Zaccone says the debate will shift away from “large versus small models” toward a more fundamental question: which problems truly require autonomous, reasoning-driven systems and which remain well served by generative AI?
“The real race will be about purpose, measurable outcomes and return on investment. AI is no longer simply a technical challenge, it has become a business strategy,” said Zaccone. “However, this evolution comes with new risks. As agentic systems gain autonomy, securing the underlying AI infrastructure becomes critical. Standards are still emerging, but adopting strong security and governance practices early dramatically increases the likelihood of success. At the same time, AI is reshaping the risk landscape faster than regulation can adapt, which means it’s raising pressing questions around data sovereignty, compliance and access to AI-generated data across jurisdictions.”
All of which allows us to say then, AI in 2026 will not just be more capable, it will need to be secure, governed and sustainable by design.
Optimising architectures & training pipelines
Ishraq Khan, CEO and founder of Kodezi Inc. an AI model built specifically for code reminds us that in 2025, AI has moved from simple API experiments to serious engineering work. The focus is no longer on what the model can do, but on how to deploy, fine-tune and govern it at scale.
“Many teams now face practical limits around data quality, compute efficiency and responsible integration with existing systems. There is a clear gap between those who just wrap APIs around foundation models and those who actually optimise architectures and training pipelines. The next phase of AI is about reliability, interpretability and building systems that engineers can trust and improve over time,” he said.
Khan says that the state of AI in 2026 feels like a crossroads i.e. there are a lot of “AI tourists” who think they are AI natives, experimenting with tools without truly understanding the systems behind them.
“We are building and adapting quickly, but real progress comes from those who combine technical depth with responsible design and execution. AI is becoming more embedded in how we create, decide and operate. The challenge now is to separate surface-level adoption from meaningful transformation and to ensure that AI development continues to serve people, not just impress them,” he added.
AI codifies developer flow state
Shannon Mason, chief Strategy officer at Tempo Software says that AI will make flow state a feature, not a fluke, for developers.
“In the next few years, AI will fundamentally reshape the software development process not by replacing engineers, but by elevating their time toward the highest-value cognitive work. As AI takes on more of the tedious, repetitive and context-switch-heavy tasks, developers will reach ‘flow states’ more reliably and spend more of their day solving the problems that actually require human judgment. Rather than maximising utilisation, organisations will start measuring the quality of decision-making and the velocity of meaningful outcomes – metrics that improve as AI absorbs low-value work. AI-driven insights will also bring a new level of fidelity to planning, helping teams project effort, risk and capacity with far greater accuracy than human intuition allows,” said Mason.
She thinks that this shift will finally close the gap between optimistic plans and achievable delivery, enabling teams to make smarter tradeoffs earlier. The companies that benefit most won’t be the ones who automate the most tasks, but the ones who redesign their development systems to amplify human creativity, focus and strategic clarity.
“With new AI-native browsing experiences like OpenAI’s Atlas launching, it is clear the AI prompt will become the new browser. We won’t be interacting with services through websites in the near future. We’ll just ask for a prompt and the agents will do all the work. With 70 percent of enterprises expecting full-scale adoption of AI agents within the next three years, this future is arriving faster than most realise. This has massive implications for most internet business models, which rely on traffic and engagement for revenue,” said Marco Palladino, CTO at Kong.
Drop the pilot
Jakob Freund, CEO and co-founder at Camunda says that for all the progress and hype surrounding AI, many agentic AI initiatives have hit a ceiling, with most pilots failing to scale beyond isolated, task-level use cases.
“To close the gap between the vision and reality of agentic AI over the next 12 months, enterprise agentic automation (EAA) will be essential. By blending dynamic AI with determinist guardrails and human-in-the-loop checkpoints, EAA empowers enterprises to automate complex, exception-heavy or cognitive work without losing control,” explained Freund.
“EAA combines policy and technology to design, govern and orchestrate agents, humans and systems across end-to-end processes to deliver trusted AI autonomy at scale. Role-based permission and human-in-the-loop checks provide greater control over outputs, striking the right balance between agent autonomy and oversight. With pressure mounting to prove real returns on AI investments, EAA will be essential to reduce risk and technical debt while unlocking meaningful business outcomes and real ROI,” he added.
Claire Keelan, managing director for UK at IT infrastructure design and operational support company Onnec says that with Europe home to a large portfolio of legacy datacentres, next year operators must prove how fast they can innovate to stay ahead in the new AI landscape.
“In 2026, we’ll see a surge in retrofitted datacentres as operators rush to upgrade legacy sites to meet soaring AI demand,” suggested Keelan. “Power and cooling will be complex but cabling and network capacity will be the real bottlenecks. Poor-quality or overcrowded cabling limits density, throttles performance and makes future upgrades almost impossible. Smart operators will invest early in high-grade structured systems that support modular expansion and long-term flexibility.
She says that this means that “retrofit-ready” will become the new benchmark for responsible, future-proof design.
Forget creation, embrace validation
Magnus Tagtstrom, corporate VP AI Transformations and GM Europe at Iterate.ai thinks that one of the bigger AI stories in 2026 is going to be how to manage velocity. He says that this comes about because with the shift from human-speed development to AI-speed generation, the bottleneck has moved from creating code to validating it.
“That shift creates what I call velocity-driven risk, where a single AI-generated flaw can now ripple across microservices and pipelines before anyone has time to review it properly,” said Tagtstrom. “So the question becomes whether (and how well) we can build governance that moves as fast as the AI itself. You’ll see AI agents validating and securing systems alongside the AIs that build them, while regulators will demand proof that every generated component has passed automated checks and human oversight.”
He says that the organisations that win in the coming year will be the ones that treat every AI assistant as a powerful new team member whose work is continuously verified, not a glorified autocomplete feature.
“Over the next 12 months, AI-supply chain breaches will escalate exponentially as threat actors aggressively search for and find ‘trusted’ AI plug-ins and AI connectors that are vulnerable to attack,” said Alex Feick, vice president of eSentire Labs. “Within the next twelve months, we will see the first widely disclosed cross-plugin breach, driven by blind trust between an enterprise LLM and the internal plugins it can invoke. The failure mode is simple: an AI model confuses an outside information request for an inside instruction and there is no guardrail at the boundary to prevent sensitive corporate data from being leaked.”
Feick further states that legal action will emerge from the inability to attribute leaked, sensitive data from an AI model’s base training data or subsequent custom training.
“In the next 12 months, we will likely see lawsuits between the developers of base AI models and businesses, which have purchased a base AI model and custom-trained the model. The EU’s Product Liability Directive suggests if a company substantially customises a foundation AI model, it could be deemed as a ‘manufacturer’ of a new AI product and be liable for defects. This implies that if sensitive data memorised by a model leaks out, the company that fine-tunes the model might be viewed as a producer of a defective product and therefore would be held responsible,” said Feick.
Downstream delights
Peter van der Putten, director of the AI Lab at Pegasystems and assistant professor, Leiden University projects that the real money and action for AI in 2026 will be in the application layer, the layer of downstream tasks where AI is applied and where the value is created – or lost.
“This is where AI gets embedded into the often unglamorous reality of business strategy, enterprise architecture and workflows. Enter the ‘advent’ of agentic AI,” said van der Putten. “But simply unleashing herds of agents won’t solve your problems in 2026. At best, they will be ineffective; at worst, they will turn into an angry mob. So the successful agentic use cases will need to be robust, predictable and reliable. They’ll focus on design-time uses, such as the ideation or design of applications, with business problem owners in the loop. Or for run-time cases, success will come from combining the creativity of agents with the predictability and repeatability of reliable data and agentic tools, such as workflows and business rules.”
He says that these agents must operate in the confinement of a case, where the required data, context and state is provided, but only within the boundary of what they are allowed to access and do. And finally, for him, there must be top-to-bottom transparency and explainability for different audiences.
Shahid Ahmed, global head of edge services at NTT DATA says that deploying AI on the factory floor isn’t as simple as adding a large language model.
“Real-world environments demand real-time decision-making and localised processing. This is where physical AI comes in. These fundamental, small AI models interact and interpret physical assets like sensors and machinery, while edge AI processes this data on or near the device, reducing latency and bandwidth demands,” explained Ahmed. “Factories produce vast amounts of data from numerous operational technology (OT) devices and sensors monitoring, everything from temperature gauges to production line machinery. Yet only a small percentage of this data is relevant for immediate decisions. Instead of relying on factory workers to sift through thousands of data points, edge AI and SLMs can surface key insights enabling faster, more targeted responses. Data that was once idle now generates insights, reshaping industries and operations one factory at a time.”
Network modernisation, now please
Kelly Burroughs, director of strategy and market development at iBwave Solutions says that a key AI trend is surfacing because enterprises are investing in the wireless infrastructure needed to support AI, automation and data-intensive operations. From her perspective, this means that network modernisation is no longer an abstract roadmap item; it is a near-term requirement.
“At the same time, advanced use cases are setting a new performance benchmark for networks,” said Burroughs. “Rising uplink demand and constant mobility mean designers must think about how to maximise success across indoor and outdoor environments. Enterprises that anticipate these requirements and strengthen their foundational wireless infrastructure today will be able to adopt today’s existing automation and AI capabilities and be ready to scale for next-generation capabilities when they arrive.”
Sateesh Seetharamiah, CEO of Infosys subsidiary, EdgeVerve says that manufacturing and industrial companies are thus looking to implement AI and automation technologies to accelerate their digital transformation journey. He suggests that forward-thinking companies will use AI, automation and advanced analytics to improve operational efficiency, reduce labour costs and enable better decision-making, thereby revolutionising the manufacturing industry landscape.
“Manufacturing companies on the fence about implementing disruptive technologies risk missing out on enormous opportunities to create digital experiences for their customers,” said Seetharamiah. “With the advent of new technologies in generative AI and the like, decision-makers are keen to understand how to leverage emerging tech to differentiate their firms. However, without a defined set of use cases and outcomes in how such capabilities need to drive outcomes in the business, firms will be stuck without a clear strategy to prioritise the right emerging tech capabilities for business success.”
From AI-first to AI-smart
Nutanix CEO Rajiv Ramaswami concludes the first part of our AI analysis by saying that businesses will move from AI-first to AI-smart.
“Many organisations dove headfirst to AI without thinking about the consequences and anticipating the real business use cases. Just like we saw with the initial rush to cloud-first adoption, enterprises are going to re-evaluate their technology stacks and truly see where AI makes sense,” advised Ramaswami. “We need to realise that AI applications have become business-critical more quickly than any other applications we’ve ever seen. In 2026, we’re going to see organisations integrate AI into their enterprise IT and explore three areas: business resiliency, Day Two operations and security.”
It seems clear that we’re on a real tipping point with AI as we go into 2026. There is more to discuss here, so for part two, click here.