
Visma turns AI from hype to value for business software

Visma is continually exploring how artificial intelligence can be used within business software, striking a delicate balance between central coordination and local autonomy. At the core of this transformation is a dedicated AI team that drives adoption internally, enables product innovation, and supports the company’s diverse portfolio across Europe and Latin America. From chatbots to generative AI and agentic AI, progress in business software is moving fast.

As AI matures, Visma is moving steadily from experimentation to scale. According to Jacob Nyman, AI Director at Visma, the key is not to dictate a rigid strategy from the top, but to create a framework in which innovation can flourish. “It’s really not wise to be too rigid or pretend to know exactly how to do things from a central perspective,” he emphasizes.

Visma’s AI transformation revolves around four interconnected shifts. First is building an AI-native workforce, where employees embed AI into daily work to amplify skills and productivity. Second comes AI-native product development, which uses AI throughout the software development lifecycle, from prototyping to coding assistance, to speed up delivery and enhance quality. Third is the creation of AI-native products, delivering intelligent capabilities directly to customers. Finally, AI-native growth functions harness AI to elevate customer-facing operations such as support, sales, and marketing, driving both efficiency and stronger customer experiences.

This structure ensures that AI is not only used internally but also embedded into offerings and aligned with sector-specific expertise. Nyman highlights that each layer strengthens the others. Internal skills provide confidence to innovate in products, while successful customer implementations inspire employees to embrace AI even more.

Developer-first adoption as foundation

Visma’s most successful adoption case so far is among its developer community. The company initially saw 5 to 7 per cent usage of tools like GitHub Copilot. Today, nearly every technical employee uses AI-assisted coding on a daily basis. Developers also experiment with alternatives such as Cursor and Windsurf, proving that variety in tooling helps accelerate adoption.

The logic is straightforward: if developers build with AI, the lifecycle from design to delivery benefits. This is not limited to writing code. Customer-facing teams, too, rely on AI to improve service delivery, with customer support being a standout area. In parallel, employees across business functions use Google’s Workspace ecosystem, including Gemini and AI Studio, for productivity and specialized tasks.

By embedding AI across both technical and operational layers, Visma ensures that AI is not just an isolated experiment, but a horizontal capability that supports the entire organization.

A front row seat to hundreds of big AI bets

One of Visma’s unique challenges is managing AI across its companies. Globally, over a hundred software companies are part of the group. Each operates in markets with different regulatory requirements, customer expectations, and cultural contexts. Centralization could stifle innovation, but leaving everything fully local risks fragmentation.

The solution lies in central enablement with local decision-making. The AI team ensures access to APIs, tools, and vendor partnerships, while also providing security and compliance guidance. But the responsibility for deciding how to use AI rests with each company. “We are a group which together we are big, and we can have partnerships with very good global leaders within AI,” explains Nyman. “But we don’t want to override local decisions. Our companies’ local presence in regulated markets is a competitive advantage.”

This federated approach ensures that AI adoption fits the local business context while maintaining trust and compliance standards across the group.


Enablement, acceleration, optimization

Visma frames its transformation in three distinct phases.

  1. Enablement – laying the groundwork by ensuring access to tools and strong vendor partnerships with favorable terms.
  2. Acceleration – capturing successful practices from individual companies and distributing them across the group to speed up adoption.
  3. Optimization – consolidating around frameworks and technologies that prove most effective, while resisting the temptation of premature optimization in such a fast-moving field.

This phased approach ensures steady progress while leaving room for adaptation. It also illustrates Visma’s recognition that transformation is not a one-time project but a continuous cycle of learning and scaling.

The next frontier

One of the strongest signals of AI maturity is the shift from simple assistants to autonomous agents. Where chatbots once answered questions, agents now handle workflows spanning multiple systems. Nyman explains this as a natural progression: “From chatbots to copilots, to assistants, to agents.” Agents became a central theme across enterprise technology in 2024. Businesses want automation that goes beyond support queries and can execute entire processes. Yet, as Nyman points out, this shift is not just about branding existing tools differently. “We don’t want to pretend to have agents, we want to build agents,” he says.

The difference between hype and reality lies in engineering. True agents require infrastructure that enables dynamic interaction with multiple systems, which would be clunky to manage with traditional APIs and integrations alone. Emerging protocols like the Model Context Protocol (MCP) are therefore gaining attention, as they allow more sophisticated context engineering.
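To make the role of MCP concrete, here is a minimal sketch of exposing a single business-system capability as an MCP tool, using the open-source MCP Python SDK (FastMCP). This is not Visma’s implementation; the server name and the invoice lookup are hypothetical examples. The point is that an MCP-capable agent can discover and call such tools at runtime instead of relying on a hard-coded point-to-point integration.

```python
# Hedged sketch: one business capability exposed as an MCP tool via the
# open-source MCP Python SDK. Server name and tool are hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("erp-tools")  # hypothetical server exposing ERP lookups


@mcp.tool()
def invoice_status(invoice_id: str) -> str:
    """Return the payment status of an invoice (stubbed for illustration)."""
    # In a real deployment this would query the underlying business system.
    stub = {"INV-1001": "paid", "INV-1002": "overdue"}
    return stub.get(invoice_id, "unknown")


if __name__ == "__main__":
    # Runs the server over stdio so an MCP-capable agent can discover
    # the tool and call it without a bespoke integration.
    mcp.run()
```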

Reliability issues compound engineering challenges. Giving agents autonomy introduces new risks. “When they get the freedom to do more stuff, they can also do more things wrong,” Nyman admits. Human oversight remains essential for critical decisions, even as agents automate routine tasks. This hybrid balance of autonomy and reliability defines today’s most effective deployments.
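The autonomy-versus-oversight balance described above can be illustrated with a small human-in-the-loop gate: routine actions run autonomously, while actions above a risk threshold require explicit approval. This is an illustrative sketch, not Visma’s design; all names, the risk score, and the threshold are hypothetical.

```python
# Illustrative human-in-the-loop gate for agent actions: routine tasks run
# directly, risky ones are escalated to a human reviewer. All names and the
# threshold are hypothetical.
from dataclasses import dataclass
from typing import Callable


@dataclass
class AgentAction:
    description: str
    risk_score: float  # 0.0 (routine) .. 1.0 (critical)


def execute_with_oversight(
    action: AgentAction,
    run: Callable[[AgentAction], str],
    ask_human: Callable[[AgentAction], bool],
    risk_threshold: float = 0.7,
) -> str:
    """Run routine actions directly; escalate risky ones to a reviewer."""
    if action.risk_score >= risk_threshold and not ask_human(action):
        return f"Rejected by reviewer: {action.description}"
    return run(action)


if __name__ == "__main__":
    # Example usage with a stubbed executor and an interactive reviewer.
    result = execute_with_oversight(
        AgentAction("Issue a credit note of EUR 12,000", risk_score=0.9),
        run=lambda a: f"Executed: {a.description}",
        ask_human=lambda a: input(f"Approve '{a.description}'? [y/N] ").lower() == "y",
    )
    print(result)
```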

Agents are here to stay

Visma’s experience confirms that successful AI agents are not built on generic intelligence but on deep domain knowledge. In Norway, for example, lawyers worked closely with AI developers to ensure that tax and accounting agents reflected precise legal expertise. In France, teams embedded region-specific regulatory knowledge directly into applications.

This principle extends to every industry vertical. The closer AI agents are tied to specialized knowledge, the more effective they become. Generic agents may handle surface-level tasks, but real differentiation comes from encoding the expertise that companies already possess.

Despite their current limitations, agents represent more than a passing trend. Several factors drive their staying power. Geopolitical competition ensures that nations invest heavily in AI. The human brain itself proves that efficient, general intelligence is possible. And most importantly, agents provide an intuitive abstraction for businesses. They can be organized, trained, and managed like employees.

“Agents strike this perfect abstraction level where they are limited enough to be designed but capable enough to do amazing stuff,” Nyman reflects. This framing resonates with business users and accelerates adoption, bridging the gap between technical innovation and everyday workflows.

A pragmatic vision of AI in software

Visma’s AI journey shows that enterprise adoption is not about chasing the latest trend but about building trust through enablement, experimentation, and careful scaling. From developer-first adoption to open-source platforms and from local autonomy to central partnerships, every step reflects a pragmatic balance between innovation and reliability.

As agents move from hype to production reality, companies that combine technical engineering with domain knowledge will separate themselves from the noise. For Visma, the long-term vision is clear. AI is not just a feature bolted onto products but a capability woven into the company’s structure. Centralized where it makes sense, localized where it matters.

Tip: Domain-specific AI beats general models in business applications