Insights & Analysis

Thinking on AI governance and human oversight

Research-grounded perspectives on the EU AI Act, governance design, and the psychology of responsible AI adoption. Written for practitioners, not academics.

Featured
Analysis

Perspectives on governance, readiness, and the human dimension

Governance

The four-layer model: why Layer 4 is the one that fails first

Most organisations invest heavily in technical controls and compliance mechanisms. Yet the layer that consistently fails under pressure is the one that is never measured.

Read the analysis →
Research

Measuring the unmeasurable: psychological safety as a governance condition

Psychological safety is treated as a culture topic. It is, in fact, a governance condition — one that determines whether escalation, challenge, and override are possible at all.

Read the analysis →
Regulation

Why deployer obligations are the most underestimated risk in the EU AI Act

Most attention to the EU AI Act focuses on providers — the organisations building AI systems. Article 26 places equally significant obligations on deployers — the organisations using them.

Read the analysis →
Practice

Automation bias: the cognitive threat your governance framework doesn't address

Automation bias — the tendency to favour AI-generated outputs over human judgement — is not a training problem. It is a measurable psychological phenomenon.

Read the analysis →
Governance

From awareness to readiness: why AI literacy programmes fail

Most AI literacy programmes are designed around awareness. The EU AI Act requires something different: a sufficient level of AI literacy that enables people to exercise oversight.

Read the analysis →
Regulation

The 72-hour notification requirement: what deployers need to have in place

When a high-risk AI system malfunctions or is involved in a serious incident, deployers must notify the relevant authority within 72 hours. Most organisations are not ready.

Read the analysis →

Stay informed

Receive new insights on AI governance, the EU AI Act, and human oversight — written for practitioners and decision-makers. No spam, unsubscribe at any time.

By subscribing, you agree to receive occasional emails from The Responsible AI Center. We respect your privacy and will never share your information. You can unsubscribe at any time.

Regulatory Reference

Key regulatory sources and standards

Our work is grounded in the regulatory texts and governance standards that define the obligations our clients face. These are the primary sources we work to.

EU AI Act — Official Text

The full text of Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence.

EUR-Lex →

ISO/IEC 42001:2023

The international standard for AI management systems. Clauses 6.2, 7.3, and 8.4 are directly relevant to our governance diagnostic methodology.

ISO Standard →

AI Office — European Commission

The EU body responsible for implementing and enforcing the AI Act. The primary source for regulatory guidance, codes of practice, and enforcement updates.

AI Office →

OECD AI Policy Observatory

International policy frameworks and principles for trustworthy AI. Provides the broader governance context within which the EU AI Act operates.

OECD AI →

Start with a conversation.

Every engagement begins with a Discovery Conversation: an honest exchange about where your organisation stands and whether we are the right fit to help.

Book a Discovery Conversation