EU AI Act

Your organisation is deploying AI.
But can your people actually govern it?

The EU AI Act requires demonstrable human oversight of AI systems. Most organisations have built the frameworks and policies. Few have measured whether their people are psychologically equipped to use them when it matters.

EU AI Act Aligned · ISO/IEC 42001:2023 Aligned

Four-Layer AI Governance Architecture
  • Layer 1 — What the system does: model monitoring & technical controls
  • Layer 2 — What the organisation requires: policies, documentation & risk registers
  • Layer 3 — Who is accountable: decision rights & escalation structures
  • Layer 4 — Whether people can actually act: human readiness & oversight in practice

Our Focus

If your organisation deploys AI systems that affect people — in financial services, healthcare, or the public sector — and you need to demonstrate that the people responsible for oversight can actually exercise it, this is for you.

The Challenge

Compliance programmes build processes.
They don't build the people who run them.

Organisations are investing heavily in AI governance frameworks, policies, and training — yet remain fundamentally blind to whether their leaders and teams feel safe to challenge AI outputs, escalate concerns, or override systems when it matters.

Traditional maturity assessments tell you where you stand on a generic five-stage model. They don't tell you whether your people can exercise the judgement that the EU AI Act explicitly demands.

The gap is not awareness. The gap is measurable readiness. And readiness — unlike awareness — can be assessed, quantified, and acted upon.

The problem is not that psychological and cultural factors are unmeasurable. The problem is that they have not yet been translated into decision-relevant, governance-ready indicators. That is precisely what we do.

Why the gap persists

Automation bias is invisible

People trust AI outputs more than their own judgement — not because they are told to, but because it is cognitively easier. This is a measurable psychological phenomenon, not a training gap.

Psychological safety is assumed

Most organisations assume their people feel safe to challenge AI. Very few have measured it. In practice, the conditions that enable genuine challenge are rare — and fragile.

Governance stops at documentation

Policies describe what should happen. They do not predict what will happen when a human operator faces a high-confidence AI recommendation under time pressure.

Readiness has never been measured

Until now, there has been no structured, practitioner-ready way to assess whether the people in oversight roles are genuinely equipped to exercise the judgement the EU AI Act requires.

EU AI Act Timeline

The compliance window is narrowing

The EU AI Act's obligations are arriving in phases. For organisations deploying high-risk AI systems, the most consequential deadline is August 2026 — and the work required to meet it cannot be compressed into the final months.

February 2025
AI Literacy Obligation

Article 4 is enforceable

Organisations must ensure a sufficient level of AI literacy for all staff dealing with AI systems. Generic e-learning will not meet this standard.

August 2025
Prohibited AI Practices: Enforcement

Penalty provisions become applicable

The prohibitions themselves have applied since 2 February 2025; from August 2025, the Act's governance and penalty provisions take effect. Organisations must have identified and discontinued any systems that fall within the Act's prohibition categories.

August 2026 → Proposed: December 2027
Full High-Risk Compliance

Complete compliance required — date under revision

Original deadline: August 2026. The Digital Omnibus proposal is currently in EU trilogue — dates remain subject to change.

MEPs propose 2 December 2027 for high-risk AI systems listed in the regulation (biometrics, critical infrastructure, education, employment, essential services, law enforcement, justice and border management).

⚠ Not yet confirmed. Organisations should continue preparing against August 2026.

How We Help

From diagnosis to embedded capability

We follow a structured methodology that begins with measurement and ends with demonstrable governance capability. Every step is grounded in evidence — not assumption.

01
Discover

Discovery Conversation

A structured conversation to understand your current governance posture, key risks, and whether our diagnostic approach is the right fit.

02
Diagnose

Governance Diagnostic

A structured, research-informed assessment of whether the people in your organisation are genuinely equipped to exercise AI oversight.

03
Advise

Advisory & Architecture

Based on what the diagnostic finds, we advise on the specific interventions required — deployer readiness, literacy architecture, or governance redesign.

04
Embed

Implementation & Review

We support implementation, measure progress through re-assessment, and ensure governance capability outlasts any single programme.

Our Approach

Four layers of AI governance. Only one is about people.

AI governance conversations tend to stop at policy and process. The question most organisations haven't yet asked is whether the people responsible for oversight can actually exercise it — in real conditions, under real pressure, against the pull of systems that are designed to be trusted.

Layer 1

What the system does

The technical infrastructure surrounding AI — how models are monitored, how data is governed, how outputs are tested and audited before they reach decision-makers.

Layer 2

What the organisation requires

The documented obligations — policies, registers, reporting lines, and regulatory filings. The paper record of how governance is supposed to work.

Layer 3

Who is accountable

The authority structures that determine who owns which decisions, who can escalate, and who is ultimately responsible when something goes wrong. Roles on paper.

Layer 4

Whether people can actually act

The human dimension — whether the individuals in those roles are genuinely equipped to question, override, escalate, and take responsibility. This is the layer most organisations have never assessed. It is also the layer regulators are beginning to require.

Where we work — Art. 4, 14, 26
Our Services

One entry point. One honest question.

Can your people actually govern AI — and can you prove it? The Diagnostic answers that question. What we do next depends entirely on what it finds. We do not arrive with a pre-set agenda. We arrive with a structured way to find out the truth.

Not a training provider

We do not deliver e-learning or awareness programmes. We specify the architecture others build — and measure whether it is producing genuine capability.

Not a maturity model

We don't place you at "Level 2" on a generic scale. We identify specific gaps, in specific roles, against specific regulatory obligations — then tell you what to do about them.

Not a compliance checkbox

Documentation tells a regulator what you intended. Our work tells you — and them — whether your organisation can actually deliver it. That is the distinction the EU AI Act enforces.

The entry point — always first
Powered by HOCS

Governance Diagnostic

A structured assessment of whether the people in your organisation are genuinely equipped to exercise AI oversight — not in theory, but in the conditions they actually face. We examine the attitudes and behaviours that research consistently links to governance effectiveness: the willingness to question, the readiness to escalate, the sense of personal accountability for decisions made with AI.

The output is not a maturity score. It is a clear picture of where human oversight is likely to hold — and where it is likely to fail — mapped against what the EU AI Act requires your organisation to demonstrate.

Book a Discovery Conversation →
What the diagnostic produces
  • A role-specific readiness profile across your oversight-critical population
  • Clear identification of where oversight is likely to hold and where it is likely to fail
  • A prioritised advisory agenda — the specific interventions the evidence recommends
  • Regulatory-ready documentation aligned to Articles 4, 14, and 26

Advisory services — determined by what the diagnostic finds

Service 02

EU AI Act Deployer Readiness

Most EU AI Act attention focuses on providers — the organisations building AI systems. Article 26 places equally significant obligations on deployers — the organisations using them. Most are not ready.

When the diagnostic reveals gaps in human oversight assignment, vendor governance, or incident response capability — we advise on the deployer compliance programme that closes them: obligation mapping, oversight role specification for each high-risk system, vendor due diligence framework, and an incident response protocol aligned to the Act's serious-incident reporting obligations.

Service 03

AI Literacy Architecture

Article 4 has been enforceable since February 2025. It requires a sufficient level of AI literacy — a standard that generic e-learning cannot meet, and that most organisations cannot yet demonstrate.

When the diagnostic reveals that literacy gaps are undermining the capacity to challenge, escalate, or override — we specify the role-differentiated architecture your L&D team builds against: needs analysis by role, tiered programme specification, and an effectiveness measurement framework. We specify it. You build it.

Service 04

Retained Advisory

AI governance is not a one-time project. Regulation develops. AI systems evolve. People move. The human factors that determine oversight effectiveness change as organisations restructure.

Most advisory relationships end when the report is delivered. Ours are designed to continue — with periodic re-assessment, governance health checks, emerging regulatory interpretation, and on-call support when something happens that can't wait.

Requires a prior engagement
Why Work With Us

Rigorous. Independent. Evidence-led.

The Responsible AI Center is an independent advisory firm. We are not reselling a platform, certifying a standard, or advocating a vendor. We are here to give your organisation an honest picture of its human oversight capacity — and a practical path to improve it.

Measurement before recommendation

We diagnose before we prescribe. No engagement begins with a pre-packaged solution. Our governance diagnostic produces the evidence that every subsequent recommendation is built on.

Regulatory precision

We work to Articles 4, 14, and 26 of the EU AI Act and ISO/IEC 42001:2023 Clauses 6.2, 7.3, and 8.4. Our outputs are designed to support audit committee reporting and regulatory scrutiny.

Sector-specific expertise

We work across financial services, healthcare, public sector, and professional services — wherever high-risk AI is deployed and human oversight is a regulatory obligation.

Intervention that sticks

We embed governance capability to outlast any single programme. Re-assessment cycles, embedded accountability, and governance mechanisms that survive leadership changes.

Built for the deadline

Our methodology is calibrated for organisations that need to demonstrate human oversight capacity within a constrained window — without sacrificing rigour.

About

Independent.
Evidence-led. Built for the practitioner.

The Responsible AI Center is an AI governance diagnostics and advisory firm based in Brussels. We work with organisations subject to the EU AI Act that need to demonstrate not only technical compliance, but also that their people are psychologically equipped to exercise genuine human oversight of AI systems.

Our focus is the gap between what organisations have built — frameworks, policies, technical controls — and whether the people operating those structures are actually ready to use them. That gap is assessable. It is quantifiable. And it is the gap regulators will scrutinise.

People govern AI. We make sure they can.

Mulya van Roon
Founder & Principal Advisor

Mulya works at the intersection of enterprise technology, risk advisory, and AI governance. His practice focuses on helping boards, executives, and control functions turn regulatory obligation into operational capability — embedding responsible AI into the way organisations design, deploy, and oversee intelligent systems.

He brings almost two decades of experience from KPMG, IBM/Kyndryl and Microsoft, combined with deep specialisation in the EU AI Act, ISO/IEC 42001:2023, and human oversight frameworks. His work spans highly regulated sectors like financial services, healthcare and the public sector.

Mulya collaborates with academic partners on AI governance research. Based in Brussels, he advises organisations across the EU.

Research

Research-informed. Developed with academic partners.

Our advisory approach is informed by ongoing research into the psychological and organisational conditions that determine whether AI governance works in practice. We collaborate with academic partners to develop the constructs that inform our diagnostic tools.

Psychological safety as a governance condition

Automation bias susceptibility in oversight roles

Construct measurement for AI oversight readiness

Translation of regulatory obligation into observable behavioural indicators

This research is not academic for its own sake. It exists to ensure that every recommendation we make is informed by structured inquiry — not assumption, not convention, and not what worked in a different regulatory context.

Insights

Thinking on AI governance and human oversight

Research-grounded perspectives on the EU AI Act, governance design, and the psychology of responsible AI adoption. Written for practitioners, not academics.

Collaborations & Partnerships

Independent by design. Collaborative by nature.

The Responsible AI Center works with a small number of academic, institutional, and practitioner partners whose work is directly relevant to the governance challenges we address.

Academic Collaboration

Research in AI Governance & Psychological Readiness

Our governance diagnostics are informed by joint research into the psychological and organisational conditions required for responsible AI oversight. This work bridges academic inquiry and boardroom practice — grounding our approach in ongoing research while keeping it actionable for practitioners.

Psychological safety as a governance condition

Construct measurement for AI oversight readiness

Translation of regulatory obligation into observable behavioural indicators

View all collaborations →
Practitioner Network

Governance, Risk & Compliance Specialists

We maintain working relationships with a select group of GRC practitioners, legal advisors, and HR transformation specialists across the Netherlands, Belgium, and the wider EU. These relationships allow us to refer clients to complementary expertise — and to bring relevant perspectives into our own engagements where appropriate.

If you work in AI law, organisational psychology, or regulatory compliance and believe there is a meaningful basis for collaboration, we welcome the conversation.

Reach out to explore →
Principle 01

Substantive, not decorative

Every collaboration we enter into has a clear intellectual or practical purpose. We do not maintain partner lists for appearance — we work with people whose expertise makes our work sharper.

Principle 02

Independence preserved

Collaboration does not compromise our independence. We do not take referral fees, endorse products, or allow commercial relationships to influence our governance diagnostics or advisory positions.

Principle 03

Client interest first

When we refer clients to partner specialists, it is because those specialists are genuinely the right resource — not because of any commercial arrangement. Our clients' interests are the only criterion.

Get in Touch

Start with a conversation.

Every engagement begins with a Discovery Conversation: an honest exchange about where your organisation stands, what the EU AI Act requires, and whether we are the right fit to help close the gap.

Discovery Conversation

A 45-minute structured conversation to explore your current governance posture, key risks, and whether our diagnostic approach is the right fit.

Speaking & Collaboration

Invitations to speak at conferences, contribute to research, or explore advisory collaboration.

Location

Based in Brussels. Working with organisations across the EU and remotely.

Request a Discovery Conversation
Complete the form below and we will be in touch within two working days.
or email us directly at info@theraicenter.org


Collaborations & Partnerships

Independent by design. Collaborative by nature.

The Responsible AI Center works with a small number of academic, institutional, and practitioner partners whose work is directly relevant to the governance challenges we address.

Our Approach to Collaboration

Partnerships that sharpen our work

We collaborate with academic researchers, governance specialists, and regulatory practitioners whose expertise directly strengthens the diagnostic and advisory work we deliver. Every partnership has a clear intellectual or practical purpose.

Principle 01

Purposeful by design

We value collaborations that have a clear intellectual or practical purpose. We look for partners whose expertise genuinely strengthens the diagnostic and advisory work we deliver — because the best partnerships are built on shared commitment, not appearances.

Principle 02

Independence preserved

We believe collaboration works best when it is free from commercial influence. We do not accept referral fees or endorse products, and we ensure that no partnership shapes our governance diagnostics or advisory positions. Our independence is what makes our work trustworthy.

Principle 03

Accountability runs through everything

Responsible AI is not a label — it is a practice. Every collaboration we enter reflects the same standard of rigour, transparency, and ethical commitment we bring to our own governance work. If a partnership cannot meet that standard, it does not proceed.

Academic Collaboration

Research in AI governance & psychological readiness

Our governance diagnostics are informed by joint research into the psychological and organisational conditions required for responsible AI oversight. This work bridges academic inquiry and boardroom practice.

Academic Collaboration

Dr Joanna Michalska
Founder, Ethica Group Ltd · PhD in Enterprise Risk Management
↗ ethicagroup.ai

Dr Michalska is a strategic advisor, researcher, and former senior executive with international leadership experience in global financial services. She has held senior roles across enterprise risk, governance, and large-scale transformation within highly regulated institutions including JPMorgan and HSBC, advising boards, executive committees, and regulators.

Joint Research Focus

Structural accountability and decision architecture under AI-enabled execution

Authority delegation and escalation integrity in automated systems

Organisational psychology of executive intervention under scale and speed

Governance design that connects strategic risk to human oversight capacity

Joint Research Programme

Collaborative research examining the psychological and organisational factors that determine whether human oversight of AI systems is genuine or ceremonial. This programme aims to build the evidence base informing our governance diagnostic approach.

Automation bias and its impact on human oversight effectiveness

Psychological safety as a governance condition in AI-augmented decision-making

Role-differentiated AI literacy and its relationship to oversight capability

Organisational conditions that enable or inhibit genuine human challenge of AI outputs

Publication & Thought Leadership

Our research findings are published through academic channels and practitioner-focused publications.

Conference Presentations

Regular presentations at European AI governance conferences, sharing emerging research and practical perspectives with the governance community.

Practitioner Publications

Articles and analysis written for governance professionals, translating academic research into actionable guidance for regulated organisations.

Regulatory Engagement

Contributing to the development of regulatory guidance and codes of practice through consultation processes and expert input.

Practitioner Network

Governance, risk & compliance specialists

We maintain working relationships with a select group of GRC practitioners, legal advisors, and HR transformation specialists across Belgium and the wider EU.

Ethica Group Ltd

Dr Joanna Michalska — Founder
Board-level advisory · Decision architecture · Authority design

Ethica Group provides independent board-level advisory on structural accountability and decision architecture in organisations where execution increasingly relies on automated and AI-enabled systems.

Decision Architecture

Design of decision rights, delegation, and escalation.

Authority Design

Allocation of decision rights aligned to strategic direction.

Human Oversight Capacity

Structural conditions for effective executive intervention.

Executive Readiness

Leadership capability under scale, speed, and complexity.

Legal & Regulatory Advisory

Specialist legal counsel on EU AI Act compliance, regulatory interpretation, and enforcement preparation.

HR & Organisational Development

When our diagnostic reveals governance gaps rooted in organisational design, we connect clients with HR transformation specialists.

Technical AI Governance

For Layer 1 and Layer 2 support — model monitoring, data governance, technical documentation.

Risk & Compliance Functions

Working with internal audit, risk management, and compliance teams to integrate human oversight findings into GRC frameworks.

Work With Us

Opportunities for collaboration

We are always open to conversations with researchers, practitioners, and institutions whose work intersects with ours.

Academic Research

Joint research programmes, co-authored publications, and doctoral supervision in AI governance, human oversight, and the psychology of AI-augmented decision-making.

Conference & Speaking

Presentations, panel discussions, and keynotes at governance, AI ethics, and regulatory conferences across Europe. We speak from evidence, not opinion.

Regulatory Consultation

Contributing to the development of codes of practice, regulatory guidance, and standards through formal consultation processes and expert advisory roles.

Practitioner Partnership

Working relationships with GRC specialists, legal advisors, and HR professionals whose expertise complements our governance diagnostic and advisory work.

Interested in collaborating?

We welcome conversations with researchers, practitioners, and institutions whose work aligns with ours.

info@theraicenter.org

AI GRC RegTech · SaaS Platform
HOCS
Human Oversight Capacity Standard

Your governance framework exists.
But can your people actually use it?

Organisations invest heavily in AI governance structures — policies, controls, documentation. The question regulators now demand you answer is harder: are the individuals responsible for overseeing AI genuinely equipped to challenge outputs, escalate concerns, and intervene when it matters? HOCS measures exactly that — continuously, at scale, with an immutable audit trail.

Regulatory alignment
Every output mapped to Articles 4, 14, and 26 of the EU AI Act and ISO/IEC 42001:2023 Clauses 6.2, 7.3 and 8.4.
Sample readiness profile — illustrative
47 roles assessed · Q1 2026 · HOCS Enterprise
  • Psychological Safety — 38%
  • Conscious Ownership — 64%
  • Critical Engagement — 51%
  • Growth Orientation — 76%
  • Adaptive Flexibility — 69%
Pattern identified: Diffused Passivity · Art. 14 alignment — conditions absent
HOCS — Human Oversight Capacity Standard
The Problem

Governance failures are rarely a framework problem.

When AI-enabled decisions go wrong, post-incident analysis almost never reveals that policies were absent. What it reveals is something harder to see on a compliance checklist: the people operating those structures lacked the psychological readiness to act.

The invisible gap

Every existing governance tool measures what the organisation has built. None of them measure whether the people operating those structures are psychologically equipped to do so. That is the gap HOCS closes.

What compliance tools measure

Frameworks in place. Policies documented. Controls implemented. Roles assigned. These are the structural artefacts of governance — necessary, but insufficient.

What they consistently miss

Whether the individual has the psychological safety to challenge an output. Whether they feel accountable enough to escalate. Whether they are genuinely prepared to intervene. This is Human Oversight Capacity — and it is unmeasured.

What the EU AI Act requires

Articles 4, 14, and 26 establish that human oversight is not merely a structural requirement — it is a behavioural one. Personnel must be literate, competent, and empowered to act. Demonstrating that requires evidence, not intent.

What regulators now ask

Not "do you have a governance policy?" but "can you demonstrate that the people responsible for oversight are actually capable of exercising it?" That requires a fundamentally different instrument.

The Platform

HOCS: an AI GRC RegTech SaaS platform — not a one-off survey.

HOCS continuously measures and audits Human Oversight Capacity — the psychological readiness of individuals and teams to challenge, escalate, and intervene in AI-enabled decisions. It identifies specific conditions that are present or absent in each team and maps every finding directly to the EU AI Act Article or ISO/IEC 42001:2023 clause that makes it operationally necessary.
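
Purely as an illustration of what a finding mapped to the provision that makes it necessary could look like in data terms, here is a minimal Python sketch; the field names, types, and values are hypothetical and do not represent the HOCS schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RegulatoryAnchor:
    """The provision that makes a finding operationally necessary."""
    instrument: str   # e.g. "EU AI Act" or "ISO/IEC 42001:2023"
    provision: str    # e.g. "Article 14" or "Clause 7.3"

@dataclass
class Finding:
    """One condition observed as present or absent in a team."""
    team: str
    dimension: str            # e.g. "Psychological Safety"
    condition_present: bool
    score: float              # illustrative 0-100 dimension score
    anchors: list[RegulatoryAnchor] = field(default_factory=list)

@dataclass
class Intervention:
    """A recommended action derived from a finding."""
    finding: Finding
    action: str
    owner: str                # a named owner, not a function
    due: date
    anchors: list[RegulatoryAnchor] = field(default_factory=list)

# Illustrative only, echoing the sample profile shown earlier on this page.
finding = Finding(
    team="Credit risk",
    dimension="Psychological Safety",
    condition_present=False,
    score=38.0,
    anchors=[RegulatoryAnchor("EU AI Act", "Article 14")],
)
intervention = Intervention(
    finding=finding,
    action="Clarify named challenge authority and document an override protocol",
    owner="Head of Credit Risk",
    due=date(2026, 4, 30),
    anchors=[RegulatoryAnchor("EU AI Act", "Article 26")],
)
print(intervention.owner, intervention.anchors[0].provision)
```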

Continuous, not point-in-time

HOCS tracks Human Oversight Capacity over time, flags deterioration before it becomes a governance incident, and benchmarks results against sector peers — providing the longitudinal evidence trail regulators require.

Regulatory mapping built in

Every output — the Article 4 compliance report, the ISO gap analysis, the Article 27 FRIA module, the AI Insights narrative, the intervention library — is mapped directly to the specific Article or clause that makes it a legal obligation.

Immutable audit trail

HOCS generates board-ready compliance documentation exportable for regulators and auditors. The audit trail is immutable — creating the evidential foundation that governance claims alone cannot provide.

SaaS · GRC RegTech · Continuous monitoring · Regulator-ready output · EU AI Act aligned
Where HOCS sits

The four-layer governance architecture

Existing governance tools address the first three layers. No tool — until HOCS — has addressed Layer 4: the human capacity to actually exercise the oversight those structures are designed to enable.

1

Technical controls

AI system architecture, algorithmic safeguards, monitoring systems.

Covered by platform providers — OneTrust, IBM OpenPages
2

Compliance mechanisms

Policies, documentation, audit trails, regulatory mapping.

Covered by compliance advisers and Big 4 engagements
3

Authority allocation

Roles, responsibilities, decision rights, escalation protocols.

Covered by governance consultants
4

Human Oversight Capacity

The psychological readiness to actually exercise oversight — to challenge AI outputs, escalate concerns, and intervene when necessary. The only layer that determines whether governance works in practice.

Addressed exclusively by HOCS
What sets HOCS apart

A governance diagnostic — not a maturity scan.

Most compliance diagnostics tell organisations what they already know: they are not fully mature. HOCS answers a different question entirely — one that determines whether oversight will actually work when it matters.

Conventional compliance diagnostic vs HOCS
  • Conventional: places the organisation on a maturity ladder (Developing → Leading). HOCS: identifies specific conditions present or absent in each team — no grades, no ladders.
  • Conventional: generic recommendations ("improve governance", "invest in training"). HOCS: precise interventions with named owners, due dates, and Article/clause mapping.
  • Conventional: one-off assessment with a PDF output. HOCS: continuous monitoring with a persistent, immutable audit trail.
  • Conventional: measures structural artefacts (policies, controls, roles). HOCS: measures Human Oversight Capacity — the psychological readiness to exercise those structures.
  • Conventional: ends the measurement exercise. HOCS: starts the governance action.
Regulatory Anchors

Built for the EU AI Act — from Article 4 to Article 26

HOCS operationalises the human oversight requirements the EU AI Act establishes in law. Every dimension, output, and recommendation is mapped to the specific Article that makes it a legal obligation — not a best practice.

Article 4

AI Literacy

Organisations must ensure adequate AI literacy among all personnel involved in AI systems. HOCS provides the evidence layer — moving beyond training completion records to measurable psychological readiness.

Mandatory from February 2025
Article 14

Human Oversight

High-risk AI systems must be deployed with effective human oversight. HOCS assesses whether designated human overseers are genuinely equipped to act — not merely assigned to a role.

Aug 2026 current · Dec 2027 Digital Omnibus proposal
Article 26

Obligations of Deployers

Deployers must ensure personnel have the necessary competence, training, and authority to exercise oversight. HOCS operationalises this obligation — providing the diagnostic evidence that competence claims require.

Aug 2026 current · Dec 2027 Digital Omnibus proposal
For Whom

Designed for regulated EU enterprises

HOCS is built for organisations operating or deploying AI in high-risk contexts — those subject to the EU AI Act's human oversight requirements and seeking board-level evidence that their people are genuinely equipped to provide it.

Financial services

Banks, insurers, and asset managers deploying AI in credit, underwriting, and investment decisions.

Healthcare & life sciences

Providers and developers operating AI systems in clinical decision support and patient risk stratification.

Manufacturing & critical infrastructure

Operators using AI in safety-critical processes, predictive maintenance, and operational control.

Public sector & EU institutions

Government bodies and EU institutions deploying AI in administrative, regulatory, or enforcement contexts.

Any board seeking evidence

Any organisation that needs to demonstrate — not merely assert — that its AI oversight is human, not theoretical.

GRC and compliance teams

Teams responsible for EU AI Act readiness who need a practitioner-grade platform to close the human oversight gap.

How It Works

From diagnostic briefing to board-ready evidence.

HOCS is the diagnostic engine that grounds every Responsible AI Center engagement — ensuring every advisory recommendation is built on evidence, not assumption.

01
Diagnostic Briefing

45 minutes. Understand your governance context, key risks, and the oversight-critical roles to assess.

02
HOCS Assessment

The instrument is deployed to your oversight population. 60 items. 15–20 minutes. Five dimensions.

03
Readiness Report

A role-specific profile, regulatory gap analysis, and prioritised intervention agenda — mapped to specific Articles.

04
Continuous Monitoring

Quarterly pulse surveys track whether interventions are producing behavioural change — with an immutable audit trail.

HOCS
Human Oversight Capacity Standard

Start with a diagnostic briefing.

A 45-minute structured conversation to understand your governance context and whether HOCS is the right fit.

Request a Briefing
Read User Journey Stories →
HOCS — Human Oversight Capacity Standard

User Journey Stories

How different roles experience HOCS — and what they walk away with.

Six user journeys
  1. Sandra · Chief Compliance Officer · Belgian financial institution
  2. David · Chief People Officer (CHRO) · Healthcare, four hospital trusts
  3. Lieselotte · Non-Executive Director · Belgian public sector agency
  4. Koen · AI Product Owner · Belgian fintech
  5. Anne-Sophie · Chief Risk Officer · Professional services firm
  6. Pieter · Regional Director · Public health authority
Introduction

What these stories show

The six user journey stories that follow illustrate how the Human Oversight Capacity Standard — HOCS — works in practice across the roles most directly affected by EU AI Act compliance obligations. Each story follows a specific individual through their encounter with the platform: the problem they faced before deployment, what the diagnostic revealed, how HOCS guided them toward specific interventions, and what they were able to demonstrate as a result.

These are not maturity assessments. HOCS does not place organisations on a ladder or assign them a compliance grade. It identifies the specific conditions present or absent in each team, explains what those conditions mean operationally, recommends precise interventions with named owners and due dates, and maps every recommendation directly to the EU AI Act Article or ISO/IEC 42001:2023 clause it addresses.

The common thread across all six stories

In every case, the organisation had formal governance structures in place. What was missing was evidence that the people operating those structures were psychologically and behaviourally equipped to act when it mattered. HOCS makes that gap visible, specific, and fixable.

1
Chief Compliance Officer

Sandra

Mid-sized Belgian financial institution · March 2026
The situation

A problem she cannot solve with the tools she has

Article 4 has been enforceable for over a year. Her legal team has been clear: training completion records are not sufficient. The standard is demonstrable AI literacy, and nobody in the institution can currently produce that evidence. Her DORA assessment is due in four months. Her board has asked her to confirm that Article 14 and Article 26 obligations are demonstrably met.

She cannot confirm it. She has frameworks, policies, a training matrix, and 47 people with formal AI oversight roles. None of that tells her whether any of those people would actually challenge, escalate, or override the systems they are supposed to be watching if something went wrong.

What HOCS revealed

A governance vulnerability profile — not a maturity score

She deploys HOCS, inviting 120 people segmented by role, function, and AI usage level. Seventy-two hours later, what comes back is not a score that places the institution at a stage on a maturity scale. It is a governance vulnerability profile — a specific picture of where, in which teams and under which conditions, the oversight architecture is most likely to break.

Critical Engagement is significantly low across three business units. The AI Insights Engine generates a narrative interpretation: in those three units, staff report high awareness of AI risk but low behavioural frequency of actually documenting challenges or escalation decisions. The scoring split has detected an attitude–behaviour gap — the precise marker of paper governance.
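
As a purely illustrative sketch of the kind of scoring split described above (not the actual HOCS algorithm), an attitude–behaviour gap can be expressed as the difference between the mean of stated-attitude items and the mean of behavioural-frequency items for the same dimension, flagged once it exceeds a threshold. The item values and the 20-point threshold below are assumptions.

```python
def attitude_behaviour_gap(attitude_items: list[float],
                           behaviour_items: list[float],
                           threshold: float = 20.0) -> tuple[float, bool]:
    """Return the gap between mean stated-attitude and mean behavioural-frequency
    scores (both on a 0-100 scale) and whether it exceeds the flag threshold.
    A large positive gap is the marker of paper governance: high stated awareness,
    low documented challenge or escalation behaviour."""
    attitude = sum(attitude_items) / len(attitude_items)
    behaviour = sum(behaviour_items) / len(behaviour_items)
    gap = attitude - behaviour
    return gap, gap > threshold

# Hypothetical item scores: high awareness of AI risk, little documented challenge.
gap, flagged = attitude_behaviour_gap([82, 78, 85], [31, 40, 28])
print(f"Gap: {gap:.1f} points, flagged: {flagged}")   # Gap: 48.7 points, flagged: True
```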

How HOCS guided intervention

From pattern identification to governance redesign

The forensic engine surfaces a Diffused Passivity pattern in the credit risk function. HOCS maps this to Article 26 and FCA SM&CR personal accountability requirements and generates a specific recommended intervention — not a training course, but a governance redesign.

HOCS intervention
Credit risk function — Diffused Passivity pattern

Clarify which named individual holds formal challenge authority for the credit model. Build a documented override protocol. Establish a quarterly accountability review.

Owner: Head of Credit Risk · Due: 30 days · Article mapping: Art. 26, FCA SM&CR

Sandra also receives Article 4 compliance documentation: a timestamped, scored readiness profile across all 120 roles, structured to meet the regulatory standard. HOCS produced the evidence as a by-product of the diagnostic.

What Sandra got
Article 4 compliance evidence — timestamped, scored, and structured for regulatory submission
Specific identification of the Diffused Passivity pattern in the credit risk function
A governance redesign intervention with named owner, due date, and Article mapping
A continuous monitoring system that tracks whether the intervention is producing behavioural change
A board-ready quarterly report showing direction of travel — not a compliance grade
2
Chief People Officer (CHRO)

David

Healthcare organisation · Four hospital trusts across the UK and Belgium
The situation

A budget, an L&D team — but no diagnostic foundation

David has been asked to design an AI literacy programme for clinical and administrative staff. He has a budget. He has an L&D team. What he does not have is any evidence of what actually needs to change. Every AI literacy programme he has reviewed measures training completion. None of them tell him whether the training produces the capability it claims to.

What HOCS revealed

It was not a knowledge gap — it was a culture gap

He deploys HOCS before designing a single module. What comes back is a diagnostic map: a breakdown of which dimensions are strong, which are fragile, and why. Growth Orientation is not the problem — clinicians believe they can develop AI skills. The AI Insights Engine flags something more significant: Psychological Safety scores are substantially below threshold across three of the four trusts. Clinical staff are aware that AI recommendations can be wrong, but they do not feel safe raising concerns in formal settings.

How HOCS guided intervention

Two tracks, not one — because the gaps were different

HOCS tells David precisely: this is not a training gap. It is a leadership and culture gap. The recommended intervention is directed to the Chief Medical Officers of the three affected trusts.

HOCS intervention
Three affected trusts — Psychological Safety below threshold

Structured psychological safety interventions at team level, modelled by clinical leadership, with explicit permission to challenge AI recommendations as a professional norm rather than a deviation from protocol.

Owner: Chief Medical Officers (3 trusts) · Due: 90 days · Article mapping: Art. 14 — conditions to intervene

David designs two tracks. One addresses the knowledge gaps the Growth Orientation sub-scores identified — specific AI risk awareness modules differentiated by clinical role. The second is handed to the CMOs as a leadership development programme.

After implementation

Regulatory evidence built into the programme itself

The second HOCS cycle generates before-and-after evidence: which intervention moved which dimension in which trust. The Article 4 compliance PDFs from both cycles provide documented, timestamped evidence of literacy improvement.

What David got
A diagnostic foundation that revealed what to build before building it — saving budget from the wrong direction
Evidence that the gap was in Psychological Safety, not knowledge — a finding that changed the intervention design entirely
A precise intervention directed to CMO-level leadership rather than an L&D module
Before-and-after evidence of literacy improvement mapped to specific trusts and dimensions
Article 4 compliance documentation generated from the programme itself — not as a separate compliance exercise
3
Non-Executive Director (NED)

Lieselotte

Belgian public sector agency · AI systems in benefits administration
The situation

Assured, not reassured

Lieselotte chairs the audit committee of a Belgian public sector agency deploying AI in benefits administration. She has reviewed the governance framework. She has been assured that oversight is in place. She is not reassured. She remembers the Dutch childcare benefits case. That agency also had a governance framework.

What HOCS revealed

Lonely Vigilance — capable people silenced by culture

HOCS is deployed across the 85 people with formal AI oversight roles. The Psychological Safety finding is stark. The AI Insights Engine explains it plainly: staff in oversight roles are identifying concerns with the benefits allocation system regularly, but the organisational culture does not support raising those concerns through formal channels. The platform names this pattern — Lonely Vigilance — and maps it directly to Article 14: the conditions to intervene are absent, even though the competence may exist.

How HOCS guided intervention

An escalation channel audit and a pre-populated FRIA

HOCS intervention
Lonely Vigilance pattern — Psychological Safety below threshold

Conduct a formal escalation channel audit. Identify whether formal routes for oversight concerns are known, accessible, and psychologically safe to use. Commission a Director-General communication explicitly naming challenge as a professional obligation.

Owner: Director-General · Review: 60 days · Article mapping: Art. 14 — conditions to intervene

The FRIA module auto-populates from the dimension scores. Where gaps exist between what the FRIA requires and what the scores show, HOCS identifies them and generates recommended remediation steps with owners.

What the board received

Evidence — not periodic assurance from management

Lieselotte brings the HOCS output to the next board meeting. She presents a specific vulnerability, a recommended intervention, an assigned owner, and a monitoring timeline. The board adopts the HOCS quarterly pulse cycle as a standing governance review mechanism.

What Lieselotte got
Board-level visibility into the human governance layer that policy documents cannot provide
Specific identification of the Lonely Vigilance pattern — capable people silenced by an unsafe culture
A pre-populated Article 27 FRIA with gap-specific remediation steps and owners
A quarterly monitoring mechanism the audit committee can present to any regulatory inquiry
A defensible evidence trail — not periodic assurance from management
4
AI Product Owner

Koen

Belgian fintech · Automated credit decisioning model
The situation

Formal liability without formal protection

Koen has been named as the designated human oversight officer under Article 26. He has the title. He does not have the mechanisms to exercise what the title requires. No formal override protocol exists. No escalation pathway is defined. He holds formal liability without formal protection.

What HOCS revealed

The gap was structural — not personal

HOCS surfaces an attitude–behaviour gap in his Conscious Ownership dimension. His stated attitudes show high personal accountability — he genuinely feels responsible for the model's outcomes. But the behavioural frequency data shows something different: he almost never formally documents challenges to model outputs, and he has never used a defined escalation channel, because none exists. The AI Insights Engine interprets this directly: the gap is not Koen's willingness or capability. It is the absence of structural conditions through which his accountability can be exercised and evidenced.

How HOCS guided intervention

Three structural actions

HOCS interventions
Three structural actions

1. Override protocol: A documented process through which Koen's authority to challenge, pause, or override the model is defined, triggered, and logged.

2. Monthly challenge meeting: A standing governance calendar item with formal minutes and a standing agenda item for Koen to document concerns.

3. Escalation pathway: Map the escalation path to board level. Ensure Koen has used it in a dry run and is psychologically safe to use it under pressure.

Article mapping: Actions 1–2 → Art. 14. Action 3 → Art. 26 + FCA SM&CR personal accountability
Three months later

A protocol used, and an audit trail built

Koen has a protocol. He has used it four times. Each use is logged in HOCS's action plan tracking. The pulse survey shows his Conscious Ownership behavioural frequency score has moved significantly — not because his attitudes changed, but because the platform gave him the structural conditions to act on attitudes he already held.

What Koen got
A clear diagnosis that the gap was structural, not personal — protecting him from misplaced blame
Three specific interventions that gave his Article 26 title operational substance
An auditable record of override decisions that provides personal protection under SM&CR
A behavioural frequency improvement evidenced by pulse survey data over time
A governance mechanism that turns formal designation into enacted accountability
5
Chief Risk Officer

Anne-Sophie

Professional services firm · AI in legal research, contract review and regulatory analysis
The situation

A gap in the risk model

Anne-Sophie's risk model covers market risk, credit risk, and operational risk. It has a gap. The EU AI Act has created a compliance exposure in the human governance layer — the risk that AI-assisted work products cause harm because the people relying on them were not genuinely equipped to verify them — and she has no quantified, repeatable way to measure it.

What HOCS revealed

A measurable risk indicator — not a governance health rating

She deploys HOCS quarterly and integrates the dimension scores into her risk framework. The platform gives her specific, timestamped signals she treats as risk indicators in the same way she treats control failures in other risk categories. When Critical Engagement scores fall below a defined threshold in the regulatory analysis team, the AI Insights Engine identifies the likely cause — a recent increase in AI usage volume without a corresponding increase in structured review time — and generates a targeted intervention.

Intervention one

Responding to a Critical Engagement drop

HOCS intervention
Regulatory analysis team — Critical Engagement below threshold

Mandatory secondary review protocol for AI-assisted regulatory outputs, with documented sign-off by a named senior analyst before client delivery.

Owner: Head of Regulatory Practice · Due: 2 weeks · Article mapping: Art. 14 — competence to intervene
Intervention two

An early warning that prevented a client risk event

The forensic engine fires a Confident Blindness pattern alert in her innovation team six weeks before they are due to present AI-assisted market analysis to a major institutional client. The platform interprets this precisely: high Growth Orientation and Adaptive Flexibility scores combined with a Critical Engagement score that has dropped 12 points. The team is enthusiastic about the AI tools and increasingly relying on outputs without systematic verification.

HOCS recommends a structured output audit session before client delivery. Anne-Sophie runs the session. The intervention is documented in the action plan log. The risk event does not occur.
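
A detection rule of the kind described, pairing high enthusiasm dimensions with a sharp fall in scrutiny, could be sketched as follows. The thresholds are hypothetical; this is not the forensic engine's published logic.

```python
def confident_blindness_alert(scores: dict[str, float],
                              previous: dict[str, float],
                              high: float = 70.0,
                              min_drop: float = 10.0) -> bool:
    """Fire when a team shows high Growth Orientation and Adaptive Flexibility
    while its Critical Engagement score has fallen sharply since the last cycle:
    enthusiasm for the tools is rising while scrutiny of their outputs is falling."""
    enthusiastic = (scores["Growth Orientation"] >= high
                    and scores["Adaptive Flexibility"] >= high)
    scrutiny_drop = previous["Critical Engagement"] - scores["Critical Engagement"]
    return enthusiastic and scrutiny_drop >= min_drop

# Hypothetical figures echoing the story: a 12-point Critical Engagement drop.
current = {"Growth Orientation": 81.0, "Adaptive Flexibility": 77.0, "Critical Engagement": 49.0}
prior = {"Critical Engagement": 61.0}
print(confident_blindness_alert(current, prior))  # True
```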

What Anne-Sophie got
A quantified, repeatable human governance risk metric she can integrate into the firm's risk model
An early warning system for the Confident Blindness pattern — high enthusiasm combined with low scrutiny
A documented intervention that prevented a specific client risk event from occurring
Evidence that the risk function identified, acted on, and resolved a human governance risk proactively
A compliance posture that goes beyond technical controls into the human layer the EU AI Act requires
6
Regional Director

Pieter

Public health authority · Patient triage and resource allocation AI
The situation

Six weeks, €40,000 — and no spare capacity

Pieter is an operations leader, not a compliance expert. His problem arrived as a legal counsel memo. His organisation needs to complete a Fundamental Rights Impact Assessment for its triage AI system before August 2026. He was quoted six weeks and €40,000 by an external consultancy. He does not have six weeks. The budget is not there. He has patients to move through a health system and no spare capacity for a multi-month compliance project.

What HOCS revealed and how it guided completion

A pre-populated FRIA — in hours, not weeks

His organisation deploys HOCS at the Enterprise tier across the clinical staff and operational managers who work with the triage system. Three hours after the assessment closes, the FRIA module auto-populates. What Pieter reviews is a structured six-step Fundamental Rights Impact Assessment with the human oversight sections pre-filled from HOCS's dimension data — translated directly into evidence for the question every national competent authority will ask: are the humans responsible for overseeing this system actually equipped to do so?

Where the FRIA requires remediation, HOCS is specific. Two gaps are identified: Critical Engagement is below threshold in the night-shift triage team, and several clinical leads are unclear on their specific override authority.

HOCS FRIA remediation
Two identified gaps

Gap 1 — Night-shift Critical Engagement: Structured AI output verification protocol for night-shift triage staff. Lead: Night Shift Clinical Supervisor. Timeline: 4 weeks.

Gap 2 — Override authority clarity: Formal override authority matrix issued to all clinical leads, with a documented dry run of the override process. Lead: Clinical Governance Officer. Timeline: 3 weeks.

FRIA mapping: Both gaps → Section 4 (Human Oversight Evidence) · Art. 14 + Art. 27
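
For illustration only, a gap check of the kind the FRIA module performs might resemble the sketch below. The threshold, field names, and the hard-coded owner are assumptions drawn from this story, not the module's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Remediation:
    gap: str
    action: str
    owner: str
    timeline_weeks: int
    fria_section: str = "Section 4 - Human Oversight Evidence"

def fria_oversight_gaps(team_scores: dict[str, dict[str, float]],
                        threshold: float = 50.0) -> list[Remediation]:
    """Draft a FRIA remediation entry for every team whose Critical Engagement
    score falls below the evidence threshold."""
    gaps = []
    for team, scores in team_scores.items():
        if scores.get("Critical Engagement", 100.0) < threshold:
            gaps.append(Remediation(
                gap=f"Critical Engagement below threshold in {team}",
                action=f"Structured AI output verification protocol for {team}",
                owner="Night Shift Clinical Supervisor",  # placeholder from the story
                timeline_weeks=4,
            ))
    return gaps

# Hypothetical scores for two triage teams.
for item in fria_oversight_gaps({
    "night-shift triage": {"Critical Engagement": 42.0},
    "day-shift triage": {"Critical Engagement": 63.0},
}):
    print(item.gap, "->", item.action)
```
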
The outcome

One assessment, three hours of review, a submitted FRIA

Pieter assigns the interventions. He submits the FRIA to the national competent authority. He did not need six weeks. He did not need €40,000. He needed one HOCS Enterprise assessment and three hours of review time. Six months later, when a new patient cohort changes the deployment context, the FRIA module regenerates automatically from the updated pulse survey scores. The evidence trail is continuous, automatic, and auditable.

What Pieter got
A mandatory Article 27 FRIA completed in hours rather than the six weeks and €40,000 quoted
Two human oversight gaps with remediation actions, owners, and FRIA section mapping
A regulatory submission that answers the competent authority's oversight question with practitioner-diagnostic evidence
A living compliance instrument that regenerates automatically when deployment context changes
A budget-efficient compliance posture that frees operational capacity for patient care
About HOCS

Human Oversight Capacity Standard

HOCS is an AI GRC RegTech SaaS platform developed by The Responsible AI Center to continuously measure and monitor Human Oversight Capacity — the psychological readiness of individuals and teams to challenge, escalate, and intervene in AI-enabled decisions. Built for regulated enterprises operating or deploying high-risk AI systems under the EU AI Act.

The instrument assesses five dimensions: Psychological Safety, Critical Engagement, and Conscious Ownership (structural mediators) plus Growth Orientation and Adaptive Flexibility (developmental enablers). Every output is mapped directly to the specific EU AI Act Article or ISO standard that makes it legally necessary.
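
To make the structure of the instrument concrete, the sketch below assumes, purely for illustration, that each item is answered on a 1–5 scale and tagged with one of the five dimensions, with scores rescaled to percentages. The scale, tagging, and rescaling are assumptions, not the published scoring method.

```python
from collections import defaultdict

DIMENSIONS = [
    "Psychological Safety", "Critical Engagement", "Conscious Ownership",  # structural mediators
    "Growth Orientation", "Adaptive Flexibility",                          # developmental enablers
]

def dimension_scores(responses: list[tuple[str, int]]) -> dict[str, float]:
    """Convert (dimension, 1-5 rating) item responses into 0-100 dimension scores
    by averaging within each dimension and rescaling to a percentage."""
    buckets: dict[str, list[int]] = defaultdict(list)
    for dimension, rating in responses:
        buckets[dimension].append(rating)
    return {
        d: round((sum(buckets[d]) / len(buckets[d]) - 1) / 4 * 100, 1)
        for d in DIMENSIONS if buckets[d]
    }

# Illustrative: a handful of items; a full assessment run would cover all 60.
sample = [("Psychological Safety", 2), ("Psychological Safety", 3),
          ("Critical Engagement", 4), ("Critical Engagement", 3),
          ("Conscious Ownership", 4), ("Growth Orientation", 5),
          ("Adaptive Flexibility", 4)]
print(dimension_scores(sample))
```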
