Summary
A Bulgarian technology company is developing a modular software toolkit supporting compliance of high-risk AI systems with the requirements of the European AI Act. The toolkit will include tools for bias detection, mitigation of over-reliance on AI outputs, and training of human overseers responsible for AI monitoring. The company seeks partners from high-risk application sectors (e.g. healthcare, finance, security, HR, infrastructure) and IT companies to jointly develop and pilot the solution.
Description
A Bulgarian company is exploring the development of a software toolkit that supports compliance of high-risk AI systems with the requirements of the European AI Act.
The AI Act introduces strict obligations for AI systems that may affect the fundamental rights, safety, or well-being of citizens. These high-risk systems span sectors such as healthcare, finance, employment services, infrastructure, security and transportation, as well as other areas where automated decision-making could have a significant societal impact.
Organisations developing or deploying such systems are required to implement safeguards including bias monitoring, human oversight, transparency and risk management. However, practical tools enabling companies to operationalise these compliance requirements are still largely unavailable, as the regulatory framework is new and most regulatory sandboxes are not yet operational.
To address this gap, the Bulgarian company proposes the development of a modular AI compliance toolkit supporting organisations in implementing technical and organisational safeguards required by the AI Act.
The planned toolkit will include several components:
- AI Bias Detection and Mitigation Tool
Supports compliance with Article 14(2) of the AI Act by enabling systematic testing of AI outputs through modification of key dataset attributes. This allows identification and mitigation of potential biases that may affect fairness or decision outcomes.
- AI Over-Reliance Mitigation Tool
Addresses risks related to excessive reliance on AI outputs, as described in Article 14(4)(b). The tool introduces deliberately incorrect outputs in controlled scenarios so that human overseers remain vigilant and critically evaluate AI recommendations.
- AI Training and Confidence-Building Tool
Supports compliance with Article 14(4)(c) and (d) by providing training tools that help human operators better understand AI limitations, detect inconsistencies, and intervene when necessary.
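To illustrate the kind of attribute-perturbation testing the bias detection component describes, the sketch below flips a sensitive dataset attribute and checks whether the system's decision changes. This is a minimal illustrative example, not the toolkit itself: the `predict` function, the `gender` attribute and the sample records are hypothetical placeholders standing in for a real high-risk AI system and its data.

```python
def predict(record):
    # Hypothetical stand-in for a high-risk AI system's decision function,
    # e.g. a loan-approval or CV-screening model.
    return record["score"] > 0.5

def counterfactual_flips(records, attribute, values):
    """Count records whose decision changes when only `attribute` is modified."""
    flips = 0
    for record in records:
        baseline = predict(record)
        for value in values:
            if value == record[attribute]:
                continue
            # Copy the record and change only the attribute under test.
            variant = {**record, attribute: value}
            if predict(variant) != baseline:
                flips += 1
                break
    return flips

records = [
    {"gender": "F", "score": 0.7},
    {"gender": "M", "score": 0.4},
]
flip_count = counterfactual_flips(records, "gender", ["F", "M"])
print(f"{flip_count} of {len(records)} decisions change when gender is flipped")
```

In this toy example no decision flips, since the placeholder model ignores the sensitive attribute; a non-zero flip count on real data would signal a potential bias to investigate and mitigate.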
The project is currently at concept stage, and no comparable commercial solutions addressing AI Act compliance requirements are widely available on the market.
Advantages and Innovations
The proposed solution aims to address a new regulatory and technological challenge introduced by the AI Act. The innovation lies in the development of a modular toolkit that can be integrated into different AI systems and sectors.
Key advantages include:
- early development of practical tools supporting AI Act compliance
- modular architecture adaptable to different high-risk sectors
- support for human oversight and responsible AI deployment
- tools for bias testing, operator training and human-AI interaction monitoring
Technical Specification or Expertise Sought
Type of partner
SMEs or companies operating in high-risk sectors (e.g. healthcare, education, finance, security, human services, infrastructure) or IT companies
Role of the partner
Partners are expected to participate in joint R&D activities, contribute to software development, and support testing and validation of the toolkit in real operational environments.
Should you have any questions, feel free to contact Mar Coromina (mcoromina@secartys.org | 623 35 59 80), and we'll do our best to help you.