Research-grounded perspectives on the EU AI Act, governance design, and the psychology of responsible AI adoption. Written for practitioners, not academics.
The EU AI Act's human oversight requirements go far beyond documented processes. Article 14 requires that natural persons assigned to human oversight understand the system's capabilities and limitations, are aware of automation bias, and can correctly interpret the system's output. Article 26 places parallel obligations on deployers — organisations using AI systems — to ensure these conditions are met in practice.
Article 4 — enforceable since February 2025 — requires a sufficient level of AI literacy for all staff dealing with AI systems. This is not a generic awareness standard. It is a role-differentiated obligation that must account for the person's technical knowledge, experience, education, and the context in which AI systems are used.
Most organisations have interpreted these articles as documentation requirements. They are not. They are capability requirements — and the distinction is consequential.
"The question is not whether you have assigned human oversight. The question is whether the humans you have assigned can actually exercise it."
Article 14 requires that human oversight be assigned to persons who understand the system's capabilities and limitations, can correctly interpret outputs, and can decide not to use the system or override its output. Understanding is not awareness. It is a demonstrable competence.
Deployers must assign human oversight to natural persons with the necessary competence, training, and authority. They must ensure those persons understand the system's intended purpose and can monitor for risks. This is an operational obligation — not a policy one.
Enforceable since February 2025, Article 4 requires a sufficient level of AI literacy — differentiated by role, context, and the specific AI systems in use. Generic e-learning does not meet this standard.
Most organisations invest heavily in technical controls and compliance mechanisms. The layer that consistently fails under pressure is the one that is never measured.
Psychological safety is treated as a culture topic. It is, in fact, a governance condition — one that determines whether escalation, challenge, and override are possible at all.
Most EU AI Act attention focuses on providers — the organisations building AI systems. Article 26 places equally significant obligations on deployers — the organisations using them.
Automation bias — the tendency to favour AI-generated outputs over human judgement — is not a training problem. It is a measurable psychological phenomenon.
Most AI literacy programmes are designed around awareness. The EU AI Act requires something different: a sufficient level of literacy that enables people to exercise oversight.
When a high-risk AI system malfunctions or causes an incident, deployers must notify the relevant authority within 72 hours. Most organisations are not ready.
Receive new insights on AI governance, the EU AI Act, and human oversight — written for practitioners and decision-makers. No spam; unsubscribe at any time.
Our work is grounded in the regulatory texts and governance standards that define the obligations our clients face. These are the primary sources we work from.
Regulation (EU) 2024/1689: the full text of the Regulation laying down harmonised rules on artificial intelligence.
ISO/IEC 42001: the international standard for AI management systems. Clauses 6.2, 7.3, and 8.4 are directly relevant to our governance diagnostic methodology.
EU AI Office: the EU body responsible for implementing and enforcing the AI Act. The primary source for regulatory guidance, codes of practice, and enforcement updates.
OECD AI: international policy frameworks and principles for trustworthy AI. Provides the broader governance context within which the EU AI Act operates.
Every engagement begins with a Discovery Conversation: an honest exchange about where your organisation stands and whether we are the right fit to help.
Book a Discovery Conversation