Workday is well prepared for the EU AI Act, who will follow?

Many organizations face a challenging deadline on August 2, when the EU AI Act takes effect. For Workday, the rollout is no longer an issue: the company states that its ISO 42001 certification and its alignment with a key NIST framework provide sufficient assurance. Today, at its Innovation event in Dublin, Workday is unveiling the governance approach that enables the platform’s customers to comply with AI legislation as well.

We spoke with Chandler Morse, Chief Corporate Affairs Officer at Workday, on our visit to the company’s Dublin office. He explains that Workday has been actively engaged in discussions with EU policymakers regarding AI legislation since 2019. It is therefore not surprising that this particular software provider is well prepared for the new law. Workday has consistently helped strike the right balance between innovation and risk mitigation.

The time pressure surrounding the EU AI Act is palpable, even for the lawmakers themselves: in March, the European Parliament approved amendments prompted by the tight implementation timeline. Workday’s AI management system has already been thoroughly tested for compliance with ISO 42001, as of 2026 the first global standard for AI management. By complying with this standard, Workday can also legitimately claim to adhere to the guidelines of the NIST AI Risk Management Framework.

Actual impact

Beyond simply complying with legislation, you want the spirit of those laws to be reflected in the software products that must comply with them. Morse says Workday strives to achieve the goals of sovereignty in a technical sense, which boils down to control and transparency over where business data is located, how it is managed, and how it is processed.

“Human-in-the-loop” is the classic term for using AI with certain safeguards, and Workday explicitly embraces it. Consider agents that do not simply perform an action but explicitly request approval first. These may still be advanced, multi-step actions, but within Workday such actions are subject to this restriction by default. That does not mean, however, that autonomy should be equated with isolation, according to Morse.
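The approval-before-execution pattern described above can be pictured as a simple gate in code. This is a minimal illustrative sketch with hypothetical names, not Workday’s actual implementation:

```python
# Minimal sketch of a "human-in-the-loop" gate for agent actions.
# All names here are hypothetical; this is not Workday's implementation.

from dataclasses import dataclass
from typing import Callable

@dataclass
class ProposedAction:
    description: str   # what the agent wants to do
    steps: list[str]   # the multi-step plan, shown to the human reviewer

def run_with_approval(action: ProposedAction,
                      approve: Callable[[ProposedAction], bool]) -> str:
    """Execute the action only after explicit human approval."""
    if not approve(action):
        return "rejected"
    # ... the agent would perform action.steps here ...
    return "executed"

# Usage: the approve callback stands in for an approval UI.
action = ProposedAction("update payroll record", ["validate", "apply", "notify"])
result = run_with_approval(action, approve=lambda a: True)
```

The point of the sketch is that even a multi-step action reaches execution only through the single approval gate, which is the default posture the article describes.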

Workday has been laying the foundation for this safe approach for several years, says Chief Legal Officer Rich Sauer. Since 2022, Workday has run a dedicated responsible AI program. Its risk assessments are directly linked to the Annex III categories of the EU AI Act. In non-legal terms, these are the sections of the legislation that define high-risk AI, including the requirement for demonstrable human intervention in key decisions made by software systems.

Based on these regulations, Workday applies a layered control framework. The intensity of measures is tailored to the potential impact of an AI system, which can vary significantly. But Morse points to something more overarching: “We believe that smart AI safeguards are crucial for building trust, and we don’t think they’re inconsistent with driving innovation.”
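A layered control framework of this kind can be sketched as a mapping from risk tier to control intensity. The tiers and controls below are illustrative assumptions, not Workday’s actual framework:

```python
# Minimal sketch of a layered (risk-tiered) control framework.
# Tier names and controls are illustrative, not Workday's actual mapping.

CONTROLS_BY_RISK = {
    "minimal": ["logging"],
    "limited": ["logging", "transparency notice"],
    "high":    ["logging", "transparency notice",
                "human approval", "impact assessment"],
}

def controls_for(risk_tier: str) -> list[str]:
    """Return the controls whose intensity matches a system's risk tier."""
    return CONTROLS_BY_RISK[risk_tier]
```

Higher-impact systems simply accumulate more (and heavier) controls, which is what “intensity tailored to potential impact” amounts to in practice.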

Human oversight and traceability

Workday has also enhanced logging and traceability for audits, incident response, and monitoring. Although this involves measuring basic IT actions, these are increasingly being performed by AI agents.
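Traceability of this kind is essentially an append-only audit trail recording which actor (human or agent) did what, when, and with what outcome. A minimal sketch under those assumptions, not Workday’s actual logging design:

```python
# Minimal sketch of an append-only audit trail for agent actions.
# Hypothetical design; not Workday's actual logging implementation.

import json
import time

class AuditTrail:
    def __init__(self):
        self._entries: list[dict] = []

    def record(self, actor: str, action: str, outcome: str) -> None:
        self._entries.append({
            "ts": time.time(),    # when it happened
            "actor": actor,       # human user or AI agent
            "action": action,     # what was attempted
            "outcome": outcome,   # e.g. "executed", "denied"
        })

    def export(self) -> str:
        """Serialize the trail for audits or incident response."""
        return json.dumps(self._entries, indent=2)

trail = AuditTrail()
trail.record("agent:payroll-assistant", "update record", "executed")
```

The same record format works whether the actor is a person or an agent, which matters as basic IT actions shift toward agents.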

But Workday also tracks people themselves, specifically their AI usage. So-called “champions” lead the way among the workforce in promoting best practices, and for customers there is a collaborative aspect to learning how to work with AI. AI Fact Sheets, in-app notifications within Workday, and generative AI disclosures are designed to drive transparency without discouraging AI use.

For its own employees, Workday has bundled these efforts into the EverydayAI program, which includes the network of AI champions. Further developments will undoubtedly follow: earlier this year, Workday integrated the AI tool Sana, acquired for $1.1 billion, into its platform within four months.

Workday clarifies its position

Workday advocates for better guidance on classifying high-risk AI; that aspect of the EU AI Act is set in stone, Morse notes, so clarity will have to come from guidance rather than amendments. The company also wants a single, consistent enforcement mechanism through the EU AI Office, without additional overlapping regulations. That will remain a challenge for Brussels, especially since AI runs, and will continue to run, across the entire spectrum of IT systems. Consider GDPR considerations for the use of personal data by AI agents, or the use of AI to raise security levels and thereby comply with cybersecurity legislation.

Workday itself states that its platform is therefore more than ready for AI legislation. The IT industry as a whole does not appear to be, or at least cannot demonstrate it in the same way with an ISO certificate. Morse does not entirely agree with that assessment; rather, he points out that readiness and compliance are a competitive advantage, one of the “rare ways to differentiate yourself.”

One could argue that software companies can never be fully ready for the EU AI Act because it remains in flux. Nevertheless, Morse is positive about the fundamental nature of the legislation, which he believes has not changed arbitrarily. We would argue that AI systems are evolving too quickly for legislation largely drafted in 2023 to stay consistent. Three years have passed, but Morse believes the emphasis on risk management, and thus the “currency of trust,” has remained the same.