On 2 August 2026, the EU AI Act's core requirements for high-risk AI systems become fully enforceable. If your organisation uses AI in hiring, performance management, scheduling, or skills assessment - and most do - you are likely running high-risk AI systems under the Act's definition. The clock is ticking.

A Brief Timeline

  1. 1 August 2024 - the AI Act entered into force.
  2. 2 February 2025 - prohibitions on unacceptable-risk AI practices began to apply.
  3. 2 August 2025 - obligations for general-purpose AI models took effect.
  4. 2 August 2026 - the core requirements for high-risk AI systems, including those used in employment, become enforceable.
  5. 2 August 2027 - extended deadline for high-risk AI embedded in products already covered by EU product-safety legislation.

What Counts as High-Risk AI in HR

Under the Act (Annex III, point 4), an AI system is classified as high-risk if it is intended to be used for:

  1. Recruitment or selection - placing targeted job advertisements, analysing and filtering applications, or evaluating candidates.
  2. Decisions affecting work-related relationships - promotion or termination, allocating tasks based on individual behaviour or personal traits, or monitoring and evaluating workers' performance and behaviour.

This is not an edge case. It captures the vast majority of modern HR technology stacks - applicant tracking systems (ATSs), performance management platforms, AI-assisted interview tools, and workforce analytics software.

What You Must Do by August 2026

  1. Notify worker representatives. Article 26(7) requires employers to inform employee representatives before deploying any high-risk AI system in the workplace. If you have not done this, you need to start now.
  2. Designate human oversight. You must identify specific individuals with the authority and practical ability to intervene in, override, or suspend AI system outputs. The oversight cannot be nominal - it must be real and documented.
  3. Implement ongoing monitoring. High-risk AI systems must be monitored for discrimination, adverse impacts, and performance degradation. If a deployer has reason to consider that a system presents a risk, it must suspend its use and inform the provider and the relevant market surveillance authority.
  4. Establish data governance and audit trails. Full documentation of training data, model decisions, and output records is required. This is substantially more demanding than most current HR data practices.
  5. Verify registration. Providers must register high-risk AI systems in the EU database before placing them on the market, and deployers that are public bodies must register their use. At minimum, confirm that your vendors have met this obligation for every tool you deploy.
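To make the audit-trail point in step 4 concrete, here is a minimal, illustrative sketch of what a per-decision record for an AI-assisted hiring tool might capture. The field names and structure are assumptions for illustration, not terms prescribed by the Act:

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Illustrative only: field names are assumptions, not language from the Act.
@dataclass
class AIDecisionRecord:
    system_id: str       # which inventoried AI system produced the output
    model_version: str   # exact model/version, for reproducibility
    timestamp: str       # when the output was generated (UTC, ISO 8601)
    input_hash: str      # hash of the input data, rather than the data itself
    output: str          # the system's recommendation or score
    human_reviewer: str  # the designated oversight person
    overridden: bool     # whether the reviewer overrode the output

def make_record(system_id: str, model_version: str, raw_input: str,
                output: str, reviewer: str, overridden: bool) -> AIDecisionRecord:
    """Build one auditable record for a single AI-assisted decision."""
    return AIDecisionRecord(
        system_id=system_id,
        model_version=model_version,
        timestamp=datetime.now(timezone.utc).isoformat(),
        input_hash=hashlib.sha256(raw_input.encode()).hexdigest(),
        output=output,
        human_reviewer=reviewer,
        overridden=overridden,
    )

record = make_record("ats-screener-01", "2.4.1", "candidate CV text ...",
                     "shortlist", "j.doe@example.com", overridden=False)
print(json.dumps(asdict(record), indent=2))
```

Hashing the input rather than storing it keeps the audit trail from becoming a second copy of candidate data, which matters when GDPR minimisation obligations apply alongside the Act.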

"Penalties for the most serious violations reach €35 million or 7% of global annual turnover - exceeding even GDPR maximums."

The Extraterritorial Reach

US and non-EU companies are not exempt. If your organisation recruits EU-based candidates, manages EU employees through global HR platforms, or deploys AI tools that EU-based workers interact with, the Act applies to you. The "Brussels Effect" - the tendency for EU regulation to set the de facto global standard - means most multinationals will end up applying these standards everywhere.

The Readiness Gap

Over half of organisations currently lack a systematic inventory of the AI systems they have in production. Many do not know which of their tools qualify as high-risk under the Act's definitions. This is where the risk is concentrated - not in the systems organisations are aware of, but in the tools embedded in everyday HR workflows that nobody has formally classified.

What To Do Right Now

Three immediate steps:

  1. Audit your HR tech stack. List every tool that makes or influences decisions about people - hiring, performance, scheduling, learning recommendations. Classify each against the Act's high-risk criteria.
  2. Assess your human oversight structures. For each high-risk system, identify who is responsible for reviewing and, if necessary, overriding its outputs. Document this formally.
  3. Engage worker representatives. Do not leave this to the last moment. The notification requirement is procedural - but the conversation it opens can be substantive and takes time.
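The audit in step 1 can be sketched as a simple inventory pass. The criteria strings below paraphrase the employment category of Annex III and the tool names are invented; this is a triage aid, not a legal determination:

```python
# Illustrative sketch of step 1: flag each inventoried HR tool against
# high-risk functions paraphrased from Annex III, point 4. The keyword
# match and example tools are assumptions, not a legal test.
HIGH_RISK_FUNCTIONS = {
    "recruitment", "candidate screening", "candidate evaluation",
    "promotion", "termination", "task allocation",
    "performance monitoring", "behaviour evaluation",
}

def classify(tool: dict) -> str:
    """Return a provisional classification for one inventoried tool."""
    if HIGH_RISK_FUNCTIONS & set(tool["functions"]):
        return "high-risk (review with counsel)"
    return "not obviously high-risk (document anyway)"

inventory = [
    {"name": "ATS resume screener", "functions": ["candidate screening"]},
    {"name": "Payroll calculator", "functions": ["payroll"]},
    {"name": "Shift planner", "functions": ["task allocation"]},
]

for tool in inventory:
    print(f"{tool['name']}: {classify(tool)}")
```

Even this crude pass surfaces the point made earlier: tools nobody thinks of as "AI systems", like a shift planner that allocates tasks, can land in the high-risk category.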

The EU AI Act is not theoretical. It is law, with a specific date, significant penalties, and extraterritorial reach. For HR teams that have been watching AI regulation from the sidelines, August 2026 is the deadline that ends the waiting.