
HCAI-ep 2026: Human-Centered AI in Practice

Pavlo Burda | Artificial Intelligence, Research, Software
Presenting at HCAI-ep 2026

The Human-Centered AI Education & Practice (HCAI-ep) conference focuses on how to design and evaluate AI systems that remain aligned with human values and real-world constraints. This is exactly where many organisations struggle today, especially in light of emerging regulation such as the EU AI Act. At HCAI-ep, we presented our work on fairness requirements in AI systems and encountered several other contributions that are particularly relevant for practice and research.

HCAI-ep 2026 brings researchers, educators, and practitioners who approach AI from different angles together in Maynooth, Ireland. What makes the conference stand out is its emphasis on concrete tools, methods, and applications, where respect for human values is treated as a starting point for design rather than an afterthought.

Designing Fair AI Systems in Practice

User Fairness Preferences

At HCAI-ep, I presented our paper “Fairness Trade-offs in Hiring: What People Prefer and What Engineers Can Build”, which addresses fairness in algorithmic hiring. Organisations deploying AI systems to support decision-making in high-risk contexts, such as recruitment, must comply with regulatory requirements under the EU AI Act. This includes conducting a Fundamental Rights Impact Assessment (FRIA), in which discrimination risks and mitigation measures must be explicitly assessed.

One key part of a FRIA is the evaluation of fairness: whether individuals affected by the system are treated impartially and without unjustified discrimination. While fairness is often discussed abstractly, choosing how to measure it in practice is a major challenge. In our paper, we introduce a requirements elicitation method that helps teams decide which fairness metrics fit their context and stakeholder values. We evaluated this approach with more than 50 participants in a realistic hiring scenario and provide a guideline that practitioners can use to translate fairness discussions into concrete, testable software requirements. This work builds directly on our earlier explainer on fairness metrics and our public Kaggle dataset.
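To make concrete why the choice of metric matters, here is a minimal Python sketch; all numbers are invented for illustration and do not come from our study. The same set of hiring decisions can look nearly balanced under demographic parity yet clearly skewed under equal opportunity.

```python
# Toy illustration: the same hiring decisions scored under two common
# fairness metrics. Data and numbers are made up for this example.
import numpy as np

# 1 = hired, 0 = rejected; "A" and "B" are protected-attribute groups
group     = np.array(["A"] * 10 + ["B"] * 10)
hired     = np.array([1,1,1,1,0,0,0,0,0,0,  1,1,1,0,0,0,0,0,0,0])
qualified = np.array([1,1,1,1,1,0,0,0,0,0,  1,1,1,1,1,1,0,0,0,0])

def selection_rate(g):
    """Share of candidates in group g who were hired."""
    return hired[group == g].mean()

def true_positive_rate(g):
    """Share of *qualified* candidates in group g who were hired."""
    mask = (group == g) & (qualified == 1)
    return hired[mask].mean()

# Demographic parity: compare overall selection rates per group
dp_diff = selection_rate("A") - selection_rate("B")

# Equal opportunity: compare selection rates among qualified candidates only
eo_diff = true_positive_rate("A") - true_positive_rate("B")

print(f"Demographic parity difference: {dp_diff:+.2f}")  # 0.40 vs 0.30 -> +0.10
print(f"Equal opportunity difference:  {eo_diff:+.2f}")  # 0.80 vs 0.50 -> +0.30
```

A team that only checks selection rates would see a modest gap here, while restricting attention to qualified candidates reveals a much larger one. Our elicitation method is designed to help stakeholders decide which of these views matches their context and values before any metric is written into the requirements.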

HCAI-ep resources worth checking out

A guided ethics and AI Act compliance assessment tool for early AI prototypes. This contribution introduces a web-based tool that guides developers and students through structured questions to assess whether an AI prototype may conflict with requirements of the EU AI Act. The output is a tailored report with prioritised gaps, which can serve as lightweight documentation to support early compliance discussions and design decisions.
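The tool itself is web-based, but its core mechanism, mapping structured answers to a prioritised gap list, can be sketched in a few lines of Python. Everything below (question texts, weights, and the article mapping) is an invented placeholder, not the tool's actual content:

```python
# Minimal sketch of a questionnaire-to-gap-report pipeline, loosely inspired
# by the assessment tool described above. Questions, article references, and
# weights are illustrative placeholders, not the tool's real content.
from dataclasses import dataclass

@dataclass
class Question:
    text: str
    reference: str  # e.g. the AI Act provision the question relates to
    weight: int     # higher = more critical if answered "no"

QUESTIONS = [
    Question("Is there human oversight over individual decisions?", "Art. 14", 3),
    Question("Is training data provenance documented?",             "Art. 10", 2),
    Question("Are affected users informed they interact with AI?",  "Art. 50", 1),
]

def gap_report(answers: dict[str, bool]) -> list[str]:
    """Return open gaps (questions answered 'no'), most critical first."""
    gaps = [q for q in QUESTIONS if not answers.get(q.text, False)]
    gaps.sort(key=lambda q: q.weight, reverse=True)
    return [f"[{q.reference}] {q.text}" for q in gaps]

answers = {QUESTIONS[0].text: False, QUESTIONS[1].text: True, QUESTIONS[2].text: False}
for line in gap_report(answers):
    print(line)
```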

An agentic AI framework to automate scientific literature review. The provided software is a multi-agent tool with RAG capabilities that automates the tedious parts of literature exploration: it looks up relevant papers, filters abstracts, evaluates full-text PDFs, and flags gaps in the current state of the art. It’s a practical tool that you can deploy locally to speed up literature search, especially valuable for users unfamiliar with research practice.
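As a rough illustration of the pipeline shape (not the tool's actual API), here is a toy end-to-end version in Python in which keyword matching stands in for the real scholarly search, LLM-based abstract filtering, and RAG-based full-text analysis:

```python
# Toy version of the literature-review pipeline described above. A hardcoded
# mini-corpus and keyword matching stand in for the real search, LLM
# filtering, and RAG full-text analysis. All names are illustrative only.
CORPUS = [
    {"title": "Fairness metrics in hiring",    "abstract": "We compare fairness metrics...",           "keywords": {"fairness", "hiring"}},
    {"title": "RAG for clinical notes",        "abstract": "Retrieval-augmented generation...",        "keywords": {"rag", "healthcare"}},
    {"title": "Bias audits for LLM screeners", "abstract": "We audit fairness of LLM CV screening...", "keywords": {"fairness", "llm", "hiring"}},
]

def search_papers(query_terms: set[str]) -> list[dict]:
    # Stand-in for querying a scholarly index such as OpenAlex
    return [p for p in CORPUS if p["keywords"] & query_terms]

def filter_abstracts(papers: list[dict], must_mention: str) -> list[dict]:
    # Stand-in for an LLM agent applying inclusion criteria to abstracts
    return [p for p in papers if must_mention in p["abstract"].lower()]

def summarise_gaps(papers: list[dict]) -> str:
    # Stand-in for a RAG agent contrasting full texts and flagging open problems
    titles = "; ".join(p["title"] for p in papers)
    return f"Reviewed {len(papers)} papers ({titles}); gap analysis would go here."

shortlist = filter_abstracts(search_papers({"fairness", "hiring"}), "fairness")
print(summarise_gaps(shortlist))
```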

A framework for evaluating bias in AI agents and agentic workflows. This work presents a repository for evaluating bias in single- and multi-agent AI workflows, including systems that rely on Model Context Protocols (MCPs). Because large language models are non-deterministic, repeated or chained interactions introduce variation that can surface as systematic bias in domains such as hiring, content moderation, or healthcare. The proposed framework provides a structured way to measure and analyse such effects across agentic workflows, offering practical support for teams experimenting with these systems.
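To illustrate the underlying measurement idea (not the framework's actual API), here is a minimal Python sketch: a stochastic stub stands in for a non-deterministic screening agent, and repeated batched runs separate a systematic group gap from ordinary run-to-run noise:

```python
# Minimal sketch of measuring outcome drift across repeated agent runs,
# in the spirit of the framework above (not its actual API). Random noise
# stands in for a non-deterministic LLM agent; the group skew is injected
# on purpose so there is something to detect.
import random
from statistics import mean, stdev

random.seed(0)

def screening_agent(profile: dict) -> bool:
    """Stub agent: noisy score with a deliberate skew toward group A."""
    score = 0.1 * profile["years_experience"] + random.gauss(0.0, 0.15)
    if profile["group"] == "A":
        score += 0.05  # injected bias to detect
    return score > 0.5

def batched_rates(profile: dict, batches: int = 20, runs: int = 50) -> list[float]:
    """Selection rate per batch of repeated runs on the same profile."""
    return [mean(screening_agent(profile) for _ in range(runs)) for _ in range(batches)]

# Two profiles identical except for the protected attribute
base = {"years_experience": 4}
rates_a = batched_rates({**base, "group": "A"})
rates_b = batched_rates({**base, "group": "B"})

print(f"group A: {mean(rates_a):.2f} ± {stdev(rates_a):.2f}")
print(f"group B: {mean(rates_b):.2f} ± {stdev(rates_b):.2f}")
print(f"gap:     {mean(rates_a) - mean(rates_b):+.2f}")
```

In practice you would replace the stub with calls to the real agentic workflow at its deployed temperature; the batching logic, which distinguishes a stable group gap from sampling noise, stays the same.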

The HCAI-ep community’s contributions represent a small but important step toward aligning AI systems with human values by translating abstract principles into concrete methods, tools, and requirements. Our paper is freely available via ACM, alongside the accompanying Kaggle dataset and the print-ready requirements elicitation guideline.

Author: Pavlo Burda
Dr. Pavlo Burda is an IT consultant and researcher specialising in emerging cybersecurity threats and people analytics for security.