Why deployer obligations are the most underestimated risk in the EU AI Act

Most EU AI Act attention focuses on providers — the organisations building AI systems. Article 26 places equally significant obligations on deployers — the organisations using them. Most deployers have not yet begun to assess their readiness for these obligations.

The distinction between provider and deployer is fundamental to the EU AI Act's regulatory architecture. The two sets of obligations differ, but they are equally consequential, and for most organisations it is the deployer obligations that will apply.

What Article 26 Requires

Deployers of high-risk AI systems must implement appropriate technical and organisational measures to ensure they use such systems in accordance with the instructions for use. They must assign human oversight to natural persons who have the necessary competence, training, and authority.

Deployers must monitor the operation of the high-risk AI system on the basis of the instructions for use and inform the provider or distributor of relevant risks. Where a deployer has reason to consider that use of the system may present a risk, they must suspend use without undue delay and inform the provider and the relevant market surveillance authority.

Critically, deployers who are employers must also inform workers and their representatives, before the system is put into use in the workplace, that they will be subject to a high-risk AI system. This transparency obligation goes beyond a one-off notice: affected workers need to understand the conditions under which AI-assisted decisions about them are made.

The Serious-Incident Notification Requirement

Where a deployer identifies a serious incident involving a high-risk AI system, they must immediately inform the provider, and then the importer or distributor and the relevant market surveillance authorities. The Act's reporting deadlines under Article 73 are tight: as short as two days for a widespread infringement, ten days where a death is involved, and no later than fifteen days in the standard case. Most organisations do not yet have the detection, escalation, and reporting infrastructure to meet these deadlines reliably.

This requirement demands not just incident response procedures, but the human capacity to detect incidents in the first place. If the people operating AI systems cannot recognise when an output is anomalous, incorrect, or harmful, the notification clock never starts, and the organisation is in breach without knowing it.

Why Deployers Are Unprepared

Most organisations have focused their EU AI Act preparation on provider-side obligations — model documentation, risk assessment, and technical compliance. Deployer obligations have received comparatively little attention, despite affecting a far larger number of organisations.

The deployer readiness gap is particularly acute in human oversight. Most deployers have not assessed whether the people assigned to oversight roles possess the competence, training, and authority the Act requires. They have not measured whether those people can exercise genuine oversight — as opposed to rubber-stamping AI outputs.

The Responsible AI Center's Deployer Readiness service is designed to close this gap. Beginning with the Governance Diagnostic, we identify where deployer obligations are not being met at the human level — and design the compliance programme that addresses them.

