Sofia, Oct 17 (BTA/GNA) – The Institute for Computer Science, Artificial Intelligence, and Technology (INSAIT) at Sofia University St. Kliment Ohridski, the Swiss Federal Institute of Technology ETH Zurich, and the company LatticeFlow AI have announced the release of the first compliance evaluation framework for generative AI models under the European Union's Artificial Intelligence Act, the Ministry of Education and Science said here on Wednesday.
The European Union now has a publicly accessible tool for evaluating the reliability of generative artificial intelligence. The tool is the product of a collaboration between the Bulgarian institute INSAIT, ETH Zurich, and LatticeFlow AI, and the European Commission has officially recognized it as a first step towards improving the practical implementation of regulations in this area.
The EU Artificial Intelligence Act, a groundbreaking regulation in the field, came into effect in August 2024. While the Act establishes regulatory requirements at a high level, it lacks detailed technical guidance for companies to implement them.
To address this, ETH Zurich, INSAIT, and LatticeFlow AI have developed the first technical interpretation of the Act’s six core principles. The resulting compliance framework translates the regulatory requirements into specific technical ones, giving companies a practical methodology for evaluating their AI models against the standards set by the legislation.
The framework is effectively the first tool, at both the European and global level, that directly connects regulatory requirements to their practical application. While previous efforts have focused on outlining broad regulations, this marks a significant step toward the comprehensive regulation of AI worldwide.
The Bulgarian institute’s key role in the project has already earned recognition at the European level: the EU’s AI Office has endorsed the project, acknowledging it as a foundational step towards broader AI regulation.
Thomas Regnier, the European Commission’s spokesperson for digital economy, research, and innovation, commented on the release: “The European Commission welcomes this study and AI model evaluation platform as a first step in translating the EU AI Act into technical requirements, helping AI model providers implement the AI Act.”
A new assessment tool for large language models (LLMs), built on this technical interpretation of the Act’s six core principles, is now publicly accessible at https://compl-ai.org. It evaluates how well popular AI models from companies such as OpenAI, Meta, Alibaba, and Anthropic comply with European regulatory standards.
In addition, any company can use this platform to assess how well its own models align with the requirements of the EU Artificial Intelligence Act.
“We invite AI researchers, developers, and regulators to join us in advancing this evolving project,” said Prof. Martin Vechev, Full Professor at ETH Zurich and Founder and Scientific Director of INSAIT in Sofia.
“We encourage other research groups and practitioners to contribute by refining the AI Act mapping, adding new benchmarks, and expanding this open-source framework. The methodology can also be extended to evaluate AI models against future regulatory acts beyond the EU AI Act, making it a valuable tool for organizations working across different jurisdictions.”
GNA/BTA