Innovative Techniques for AI Fairness

In our continuous effort to enhance fairness and accuracy in AI systems, the AEQUITAS project has pioneered a suite of innovative techniques designed to detect and mitigate bias. These techniques are embedded directly within our experimentation tool, giving users powerful, accessible means to conduct experiments and refine their processes. The list below details the techniques we have developed, each tailored to address specific challenges in bias detection and correction.

AI Bias Detection & Awareness

This section outlines our key contributions to bias detection and awareness, featuring cutting-edge research and methodologies that address the complex challenges posed by socio-technical decision-making environments. New detection tools have been proposed, as well as new metrics for bias evaluation. Each entry below details a significant advancement in the field, designed to refine how we assess, understand, and correct biases in AI applications.
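As an illustration of the kind of bias-evaluation metric such detection tools build on, the sketch below computes the classic statistical parity difference between two groups. This is a generic, well-known metric rather than an AEQUITAS-specific one, and the function name and toy data are our own.

```python
def statistical_parity_difference(y_pred, group):
    """P(pred=1 | group=0) - P(pred=1 | group=1).

    A value near 0 suggests the model gives positive predictions
    to both groups at similar rates.
    """
    rate = {}
    for g in (0, 1):
        preds = [p for p, s in zip(y_pred, group) if s == g]
        rate[g] = sum(preds) / len(preds)
    return rate[0] - rate[1]

# Toy example: group 0 receives positive predictions far more often.
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(statistical_parity_difference(y_pred, group))  # 0.75 - 0.25 = 0.5
```

A practical detection tool would report several such metrics side by side, since no single number captures all relevant notions of fairness.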

AI Bias Mitigation

The AEQUITAS project has led to the development of several innovative techniques aimed at mitigating AI bias and ensuring fairness in various domains. These advancements focus on incorporating fairness into the very structure of AI systems, ensuring not only immediate fairness but also long-term stability and adaptability.
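One widely used pre-processing mitigation of this kind is reweighing (a standard technique from the literature, due to Kamiran and Calders, not a method specific to AEQUITAS). The sketch below assigns each training instance a weight so that, in the weighted data, group membership and label become statistically independent; the variable names and toy data are our own.

```python
from collections import Counter

def reweighing_weights(labels, groups):
    """Weight w(g, y) = P(g) * P(y) / P(g, y), so that after weighting,
    group membership and label are statistically independent."""
    n = len(labels)
    p_g = Counter(groups)          # counts per group
    p_y = Counter(labels)          # counts per label
    p_gy = Counter(zip(groups, labels))  # joint counts
    return [
        (p_g[g] / n) * (p_y[y] / n) / (p_gy[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Toy data: group 0 has twice as many positive labels as group 1.
labels = [1, 1, 0, 0, 1, 0, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
weights = reweighing_weights(labels, groups)
```

In this toy example the weighted positive-label rate becomes 0.375 for both groups, so a learner trained on the weighted data no longer sees a label imbalance between them.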

AI Fairness: Methodology and Formal Methods

In the AEQUITAS project, a key focus has been the development of comprehensive methodologies and formal methods for assessing and enforcing fairness throughout the AI lifecycle. The main contributions are the following.

  • Assessing and Enforcing Fairness in the AI Lifecycle: This survey organizes the current state of research on fairness concepts and related bias-mitigation techniques across the AI lifecycle, and highlights the gaps and challenges identified during its development. It laid the foundation for the AEQUITAS methodology, as well as for the state-of-the-art techniques included in the experimentation environment.

  • A geometric framework for fairness: The paper presents the GEOmetric Framework for Fairness (GEOFFair), which provides an intuitive and rigorous approach to understanding fairness in machine learning. By representing fairness-related elements as vectors and sets, GEOFFair allows for visualizing mitigation techniques, constructing proofs, and exploring fairness properties like distances between fairness vectors and trade-offs between metrics.
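To make the geometric intuition concrete, the sketch below represents a classifier's group-wise positive-prediction rates as a vector and measures its Euclidean distance to the "fair" diagonal, where all components are equal. This is our simplified reading of the vector-based idea, not code from the GEOFFair paper; the function names are hypothetical.

```python
import math

def fairness_vector(y_pred, group, groups=(0, 1)):
    """Group-wise positive-prediction rates as a point in R^|groups|."""
    vec = []
    for g in groups:
        preds = [p for p, s in zip(y_pred, group) if s == g]
        vec.append(sum(preds) / len(preds))
    return vec

def distance_to_fair_set(vec):
    """Euclidean distance from vec to the fair diagonal {v : all v_i equal},
    computed via the orthogonal projection onto that diagonal."""
    mean = sum(vec) / len(vec)
    return math.sqrt(sum((v - mean) ** 2 for v in vec))

v = fairness_vector([1, 1, 0, 0, 1, 0, 0, 0], [0, 0, 0, 0, 1, 1, 1, 1])
# v == [0.5, 0.25]; its projection onto the diagonal is [0.375, 0.375]
print(distance_to_fair_set(v))  # ~0.177
```

In this picture, a mitigation technique can be visualized as a move that shortens the distance between the model's fairness vector and the fair set, which is exactly the kind of reasoning the geometric framework makes rigorous.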

Benchmarks and Synthetic Data Generation: Different Bias Polarization

The AEQUITAS project has also made significant advances in the creation of benchmarks and the generation of synthetic data to explore and evaluate bias polarization in AI systems. These innovations are crucial for assessing the impact of various biases and ensuring that mitigation techniques perform effectively under different scenarios.

  • Generation of Clinical Skin Images with Pathology with Scarce Data: This research presents a Machine Learning (ML) technique for generating synthetic, realistic skin images for dermatology, addressing the challenge of limited training data for disease detection. Using just a few samples, the approach augments datasets and improves image classification tasks, as demonstrated with data from Use case HC1.
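While the clinical use case concerns images, the benchmark idea of controllable bias polarization is easiest to illustrate on tabular data. In the hypothetical generator below, labels depend on a neutral "merit" score, but positive labels for one group are flipped at a tunable rate; polarization=0 yields an unbiased benchmark. The parameter names and the flipping rule are our own illustration, not the project's actual generator.

```python
import random

def synthetic_biased_dataset(n, polarization, seed=0):
    """Generate (group, merit, label) rows with tunable label bias.

    With probability `polarization`, a positive label for the
    disadvantaged group (g=1) is flipped to negative, simulating
    historical bias of adjustable strength.
    """
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        g = rng.randint(0, 1)
        merit = rng.random()
        y = 1 if merit > 0.5 else 0
        if g == 1 and y == 1 and rng.random() < polarization:
            y = 0  # injected bias against group 1
        rows.append((g, merit, y))
    return rows

# Same underlying population, two bias levels for benchmarking.
fair_data   = synthetic_biased_dataset(1000, polarization=0.0)
biased_data = synthetic_biased_dataset(1000, polarization=0.8)
```

Sweeping the polarization parameter produces a family of benchmarks on which a mitigation technique can be stress-tested, from mild to severe bias, under otherwise identical data distributions.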