The Responsible AI Center works with a small number of academic, institutional, and practitioner partners whose work is directly relevant to the governance challenges we address.
We collaborate with academic researchers, governance specialists, and regulatory practitioners whose expertise genuinely strengthens the diagnostic and advisory work we deliver. Every partnership has a clear intellectual or practical purpose, because the best partnerships are built on shared commitment, not appearances.
We believe collaboration works best when it is free from commercial influence. We do not accept referral fees or endorse products, and we ensure that no partnership shapes our governance diagnostics or advisory positions. Our independence is what makes our work trustworthy.
Responsible AI is not a label — it is a practice. Every collaboration we enter reflects the same standard of rigour, transparency, and ethical commitment we bring to our own governance work. If a partnership cannot meet that standard, it does not proceed.
Our governance diagnostics are informed by joint research into the psychological and organisational conditions required for responsible AI oversight. This work bridges academic inquiry and boardroom practice.
Dr Michalska is a strategic advisor, researcher, and former senior executive with international leadership experience in global financial services. She has held senior roles across enterprise risk, governance, and large-scale transformation within highly regulated institutions including JPMorgan and HSBC, advising boards, executive committees, and regulators.
Collaborative research examining the psychological and organisational factors that determine whether human oversight of AI systems is genuine or ceremonial. This programme produces the empirical foundation for our governance diagnostic methodology.
Our research findings are published through academic channels and practitioner-focused publications.
Regular presentations at European AI governance conferences, sharing research findings and practical frameworks with the governance community.
Articles and analysis written for governance professionals, translating academic research into actionable guidance for regulated organisations.
Contributing to the development of regulatory guidance and codes of practice through consultation processes and expert input.
We maintain working relationships with a select group of GRC practitioners, legal advisors, and HR transformation specialists across Belgium and the wider EU. When our diagnostic identifies needs that fall outside our core expertise, we connect clients with the right specialists.

Ethica Group (ethicagroup.ai) provides independent board-level advisory on structural accountability and decision architecture in organisations where execution increasingly relies on automated and AI-enabled systems. Their work complements our governance diagnostics by addressing the structural conditions — authority allocation, escalation integrity, and governance alignment — that determine whether human oversight can function in practice.
Design of decision rights, delegation, and escalation within organisational systems.
Allocation of decision rights and override capacity aligned to strategic direction.
Structural and behavioural conditions required for effective executive intervention.
Leadership capability to exercise authority under scale, speed, and system complexity.
Specialist legal counsel on EU AI Act compliance, regulatory interpretation, and enforcement preparation. We work with legal advisors who understand both the letter and the spirit of the regulation.
When our diagnostic reveals that governance gaps are rooted in organisational design, role clarity, or capability development, we connect clients with HR transformation specialists who can design and deliver the interventions.
For organisations that need Layer 1 and Layer 2 support — model monitoring, data governance, technical documentation — we maintain relationships with technical governance specialists who complement our Layer 4 focus.
Working with internal audit, risk management, and compliance teams to ensure that human oversight findings are integrated into existing GRC frameworks and reporting structures.
Supporting boards and executive committees in understanding their AI governance obligations and the human oversight implications of the EU AI Act for their specific operating context.
When our AI Literacy Architecture service identifies specific training needs, we connect clients with L&D specialists who can build and deliver the role-differentiated programmes we have designed.
We are always open to conversations with researchers, practitioners, and institutions whose work intersects with ours. If you are working on the human dimension of AI governance, we would like to hear from you.
Joint research programmes, co-authored publications, and doctoral supervision in AI governance, human oversight, and the psychology of AI-augmented decision-making.
Presentations, panel discussions, and keynotes at governance, AI ethics, and regulatory conferences across Europe. We speak from evidence, not opinion.
Contributing to the development of codes of practice, regulatory guidance, and standards through formal consultation processes and expert advisory roles.
Working relationships with GRC specialists, legal advisors, and HR professionals whose expertise complements our governance diagnostic and advisory work.
We welcome conversations with researchers, practitioners, and institutions whose work aligns with ours. Reach out to explore how we might work together.
[email protected]