On 2 August 2026, the EU AI Act's core requirements for high-risk AI systems become fully enforceable. If your organisation uses AI in hiring, performance management, scheduling, or skills assessment - and most do - you are likely running high-risk AI systems under the Act's definition. The clock is ticking.
A Brief Timeline
- August 2024: EU AI Act enters into force
- February 2025: Prohibited practices banned; AI literacy obligations take effect
- August 2025: Obligations for general-purpose AI models apply
- August 2026: Full enforcement for high-risk AI systems - the deadline that matters for HR
What Counts as High-Risk AI in HR
Under the Act, AI systems used for any of the following employment-related purposes are classified as high-risk:
- Recruitment and candidate screening
- CV ranking and filtering
- Interview analysis (including video)
- Skills testing and assessment
- Performance evaluation
- Promotion and termination decisions
- Scheduling and task allocation
This is not an edge case. It captures the vast majority of modern HR technology stacks - applicant tracking systems (ATSs), performance management platforms, AI-assisted interview tools, and workforce analytics software.
What You Must Do by August 2026
- Notify worker representatives. Article 26(7) requires employers to inform employee representatives before deploying any high-risk AI system in the workplace. If you have not done this, you need to start now.
- Designate human oversight. You must identify specific individuals with the authority and practical ability to intervene in, override, or suspend AI system outputs. The oversight cannot be nominal - it must be real and documented.
- Implement ongoing monitoring. High-risk AI systems must be monitored for discrimination, adverse impacts, and performance degradation. If issues are identified, deployers are obliged to suspend use of the system and inform the provider.
- Establish data governance and audit trails. Full documentation of training data, model decisions, and output records is required. This is substantially more demanding than most current HR data practices.
- Register your systems. High-risk AI systems must be registered in the EU AI database. Registration is primarily the provider's obligation, so verify that the vendors behind your tools have done it - and document that verification.
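The monitoring obligation above can be made concrete with a simple disparate-impact check on a screening tool's outcomes. This is an illustrative sketch only: the 0.8 ("four-fifths") threshold is a benchmark borrowed from US employment practice, used here as one possible monitoring metric, not a requirement stated in the Act, and the group names and figures are hypothetical.

```python
# Illustrative sketch: flag selection-rate disparities in an AI screening tool.
# Threshold and data are assumptions for demonstration, not legal standards.

def selection_rates(outcomes):
    """outcomes maps group -> (selected, total); returns group -> rate."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def disparate_impact_flags(outcomes, threshold=0.8):
    """Return groups whose selection rate falls below threshold * best rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: round(r / best, 2) for g, r in rates.items() if r / best < threshold}

# Hypothetical monitoring snapshot: 40/100 of group_a selected vs 25/100 of group_b.
outcomes = {"group_a": (40, 100), "group_b": (25, 100)}
print(disparate_impact_flags(outcomes))  # flags group_b (ratio 0.25/0.40)
```

A check like this would run on a schedule against real outcome data, with flagged results feeding the suspension decision the Act requires.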
"Penalties for the most serious violations reach €35 million or 7% of global annual turnover - exceeding even GDPR maximums."
The Extraterritorial Reach
US and non-EU companies are not exempt. If your organisation recruits EU-based candidates, manages EU employees through global HR platforms, or deploys AI tools that EU-based workers interact with, the Act applies to you. The "Brussels Effect" - the tendency for EU regulation to set the de facto global standard - means most multinationals will end up applying these standards everywhere.
The Readiness Gap
Over half of organisations currently lack a systematic inventory of the AI systems they have in production. Many do not know which of their tools qualify as high-risk under the Act's definitions. This is where the risk is concentrated - not in the systems organisations are aware of, but in the tools embedded in everyday HR workflows that nobody has formally classified.
What To Do Right Now
Three immediate steps:
- Audit your HR tech stack. List every tool that makes or influences decisions about people - hiring, performance, scheduling, learning recommendations. Classify each against the Act's high-risk criteria.
- Assess your human oversight structures. For each high-risk system, identify who is responsible for reviewing and, if necessary, overriding its outputs. Document this formally.
- Engage worker representatives. Do not leave this to the last moment. The notification requirement is procedural - but the conversation it opens can be substantive and takes time.
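The audit step above amounts to mapping each tool's declared uses against the high-risk categories listed earlier. A minimal sketch of that classification, assuming a hand-maintained inventory; the category labels are paraphrased from this article, not official legal definitions, and any tool without a high-risk match still needs human review rather than automatic clearance:

```python
# Illustrative sketch: classify an HR tool inventory against the Act's
# high-risk employment categories. Labels paraphrase this article, not the law.

HIGH_RISK_USES = {
    "recruitment", "cv_ranking", "interview_analysis", "skills_assessment",
    "performance_evaluation", "promotion_termination", "scheduling",
}

def classify_stack(inventory):
    """inventory maps tool name -> set of declared uses."""
    return {
        tool: ("high-risk" if uses & HIGH_RISK_USES else "review-needed")
        for tool, uses in inventory.items()
    }

# Hypothetical stack: vendor names are placeholders.
stack = {
    "ats_vendor_x": {"recruitment", "cv_ranking"},
    "survey_tool_y": {"engagement_surveys"},
}
print(classify_stack(stack))
```

Even a spreadsheet version of this exercise - every tool, its uses, its classification, and who signed off - is the documentation baseline the rest of the compliance work builds on.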
The EU AI Act is not theoretical. It is law, with a specific date, significant penalties, and extraterritorial reach. For HR teams that have been watching AI regulation from the sidelines, August 2026 is the deadline that ends the waiting.