The EU AI Act [EU 2023] is a regulatory milestone, enacted in 2024. This legislation is the world’s first comprehensive attempt to govern AI systems across their entire lifecycle. Its message is unambiguous: innovation must be coupled with responsibility.
The Act offers more than compliance mandates for IT teams; it provides a blueprint for building safe, explainable, and human-aligned AI systems. In this chapter, we explore what the EU AI Act means for software teams working with GenAI, both as developers and users, and how quality engineering can be the lever for sustainable compliance.
The AI Act is built on six core principles that align closely with modern quality values and ethical principles. The following excerpt is a direct citation from the official European legislation:
Human agency and oversight means that AI systems are developed and used as a tool that serves people, respects human dignity and personal autonomy, and that is functioning in a way that can be appropriately controlled and overseen by humans.
Technical robustness and safety means that AI systems are developed and used in a way that allows robustness in the case of problems and resilience against attempts to alter the use or performance of the AI system so as to allow unlawful use by third parties, and minimize unintended harm.
Privacy and data governance means that AI systems are developed and used in accordance with privacy and data protection rules, while processing data that meets high standards in terms of quality and integrity.
Transparency means that AI systems are developed and used in a way that allows appropriate traceability and explainability, while making humans aware that they communicate or interact with an AI system, as well as duly informing deployers of the capabilities and limitations of that AI system and affected persons about their rights.
Diversity, non-discrimination and fairness means that AI systems are developed and used in a way that includes diverse actors and promotes equal access, gender equality and cultural diversity, while avoiding discriminatory impacts and unfair biases that are prohibited by Union or national law.
Social and environmental well-being means that AI systems are developed and used in a sustainable and environmentally friendly manner as well as in a way to benefit all human beings, while monitoring and assessing the long-term impacts on the individual, society and democracy.
These principles echo what quality engineers already pursue: systems that are robust, fair, and built with users in mind. When initiating a quality and test policy, drawing up generic test agreements, and creating a quality engineering strategy, these principles are core guidelines to ensure AI-based systems comply with European legislation.
Of course, the EU AI Act is much broader than these principles; for example, it elaborates on situations where the use of AI is prohibited (such as social credit scoring), AI applications that must comply with mandatory requirements (such as use in medical devices or vehicles), and general-purpose AI-based systems. Various standardization and regulatory bodies are currently creating supporting guidelines, of which we describe an example in the following section.
An essential aspect of ethical AI use is transparency. Transparency delivers significant economic and societal value by fostering social trust, enabling effective coordination through the reliable exchange of intentions and status, and empowering consumers to make informed choices. Clarifying the attributes of products and services incentivizes ethical behavior among providers and supports more accurate, data-driven decision-making.
To support this principle, the IEEE introduced a new standard in 2024 [IEEE 2024] that defines five distinct levels of AI involvement. This standard is called the "IEEE Standard for Transparent Human and Machine Agency Identification" and is identified as IEEE Std 3152™-2024.
This standard specifies visual and audio marks for transparently labeling communications in contexts prone to confusion regarding the essential characteristics of an interlocutor (human and/or AI) and associated media. The following section outlines these five levels of AI involvement and their corresponding symbols.
AI systems should give users a simple, understandable way to see what the system is doing, why, and how. For this, there are five classes of marks, each with a visual and audio component, as well as an associated standard description. They are as follows:
The EU AI Act uses a risk-based model to determine compliance obligations: unacceptable-risk practices are prohibited, high-risk systems face strict requirements, limited-risk systems carry transparency obligations, and minimal-risk applications remain largely unregulated. This approach enables regulators to focus on high-impact AI while avoiding overregulating low-risk tools.
Understanding where an AI application falls in this pyramid is foundational to aligning development and QE efforts with the law.
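A simple way to keep this classification visible in the engineering process is to record it as an explicit artifact. The following Python sketch illustrates the idea under stated assumptions: the keyword lists and the triage_risk function are illustrative shortcuts, not a legal classification method, which always requires assessing the intended purpose against the Act and its annexes.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited practice"
    HIGH = "high-risk"
    LIMITED = "limited-risk (transparency obligations)"
    MINIMAL = "minimal-risk"

# Illustrative keyword triage only; a real classification requires a legal
# assessment of the intended purpose against the Act and its annexes.
PROHIBITED_USES = {"social scoring", "subliminal manipulation"}
HIGH_RISK_USES = {"recruitment", "credit scoring", "medical device", "law enforcement"}
TRANSPARENCY_USES = {"chatbot", "deepfake", "emotion recognition"}

def triage_risk(intended_use: str) -> RiskTier:
    """Rough first-pass triage of an AI use case into an EU AI Act risk tier."""
    use = intended_use.lower()
    if any(term in use for term in PROHIBITED_USES):
        return RiskTier.UNACCEPTABLE
    if any(term in use for term in HIGH_RISK_USES):
        return RiskTier.HIGH
    if any(term in use for term in TRANSPARENCY_USES):
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage_risk("chatbot that screens candidates for recruitment"))  # RiskTier.HIGH
```

Note how a use case that looks like a limited-risk chatbot is escalated to high-risk as soon as it touches recruitment, which mirrors the classification pitfall described above.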
The Act includes a phased rollout: it entered into force on 1 August 2024; bans on prohibited practices and AI literacy obligations apply from 2 February 2025; obligations for general-purpose AI models apply from 2 August 2025; most remaining provisions, including those for high-risk systems, apply from 2 August 2026; and high-risk AI embedded in regulated products must comply by 2 August 2027.
For software teams, these dates aren’t just regulatory markers but critical inputs to your development roadmap.
If you’re developing AI-powered software, QE is your ally in achieving compliance. Here’s how:
Risk Classification: Begin by assessing your AI's intended use and classifying its risk level. Even if it seems simple on the surface, a chatbot assisting with hiring decisions could be high-risk.
Design for Oversight: Include the ability for humans to review, override, or halt AI decisions. Human-in-the-loop design is not optional; it's legally required.
Bias Mitigation and Data Governance: Audit your training data; document data sources, preprocessing steps, and validation protocols; and address imbalances before they become liabilities.
Traceability and Logging: Maintain logs that capture how AI makes decisions. This ensures transparency and supports post-deployment auditing (see the sketch after this list).
Documentation: Prepare a technical dossier early. This dossier should include your model's purpose, design choices, risk assessments, and testing results. Think of it as the AI equivalent of a product safety file.
Monitoring Post-Deployment: Track model behavior in production. Create alert mechanisms for deviations or failures. Serious incidents must be reported to EU authorities.
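Several of these practices can be supported directly in code. The minimal Python sketch below combines a human-in-the-loop gate with an append-only decision log; the names AIDecisionRecord and decide_with_oversight, the confidence threshold, and the model version tag are illustrative assumptions, not requirements prescribed by the Act.

```python
import json
import logging
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Optional

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

@dataclass
class AIDecisionRecord:
    """One traceability record per AI decision: input, output, model version, reviewer."""
    timestamp: str
    model_version: str
    input_summary: str
    ai_output: str
    confidence: float
    human_reviewer: Optional[str] = None
    final_decision: Optional[str] = None

def record_decision(record: AIDecisionRecord) -> None:
    # Append-only audit trail; in production this would go to tamper-evident storage.
    audit_log.info(json.dumps(asdict(record)))

def decide_with_oversight(ai_output: str, confidence: float, reviewer: str,
                          threshold: float = 0.8) -> str:
    """Route low-confidence outputs to a human reviewer before they take effect."""
    record = AIDecisionRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_version="model-v1.2",              # assumption: a version tag is available
        input_summary="candidate profile #123",  # illustrative input reference
        ai_output=ai_output,
        confidence=confidence,
    )
    if confidence < threshold:
        # Human-in-the-loop: the reviewer can accept, amend, or reject the AI output.
        record.human_reviewer = reviewer
        record.final_decision = f"pending review by {reviewer}"
    else:
        record.final_decision = ai_output
    record_decision(record)
    return record.final_decision

print(decide_with_oversight("shortlist candidate", confidence=0.65, reviewer="qa_lead"))
```

In a real system, the same log would feed the technical dossier and the post-deployment monitoring alerts described above.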
These steps aren’t just legal formalities; they also strengthen your product’s quality, reliability, and trustworthiness.
What if your team simply uses GenAI tools like code assistants, testing optimizers, or deployment predictors? The good news: most of these fall under minimal or limited risk. You're not obligated to certify their use but are still accountable for their outputs.
Human Oversight Remains Critical: Don't treat GenAI outputs as infallible. Code snippets, test cases, or UI suggestions from AI tools should be reviewed, tested, and version controlled (a sketch of such a review gate follows this list).
Check Vendor Compliance: From 2025 onward, many AI tool vendors are required to comply with the Act's transparency and content labeling rules. Choose tools from providers who are preparing for this; it will reduce your downstream risk.
Beware of the Deployment Shift: If you integrate AI into your product (e.g., a recommendation engine or decision module), you become a provider or deployer. This shifts your responsibilities: you'll need to classify the AI feature and possibly meet transparency or even high-risk requirements.
Stay AI Literate: The Act emphasizes AI literacy. Teams must understand the strengths, weaknesses, and ethical implications of the tools they use. Participating in training or governance programs is not just helpful; it's likely to become expected.
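One lightweight way to enforce such oversight is to attach review metadata to every AI-generated artifact and gate merges on it. The Python sketch below is purely illustrative; GenAIArtifact, merge_gate, and the specific checks are hypothetical names and criteria, not part of any particular tool and not mandated by the Act.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class GenAIArtifact:
    """Metadata attached to an AI-generated code or test artifact before merge."""
    path: str
    generated_by: str                  # tool name and version, e.g. "code-assistant 3.1"
    reviewed_by: Optional[str] = None  # human reviewer sign-off
    tests_passed: bool = False
    issues: List[str] = field(default_factory=list)

def merge_gate(artifact: GenAIArtifact) -> bool:
    """Block AI-generated changes that lack human review or passing tests."""
    if artifact.reviewed_by is None:
        artifact.issues.append("missing human review sign-off")
    if not artifact.tests_passed:
        artifact.issues.append("tests not run or failing")
    return not artifact.issues

snippet = GenAIArtifact(path="src/discount.py", generated_by="code-assistant 3.1")
snippet.tests_passed = True
print(merge_gate(snippet), snippet.issues)  # False ['missing human review sign-off']
```

The design choice here is that accountability stays with the team: the gate does not judge the quality of the AI output itself, it only verifies that a human did.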
At first glance, the EU AI Act might seem like a compliance burden. But teams that embrace its principles early gain an edge. Here’s why:
Market Readiness: AI systems aligned with the Act are future-proof for the EU market, and likely for other regions adopting similar standards.
Trust by Design: Transparency, safety, and oversight are not just regulatory ideals; they're user expectations. Building these into your systems elevates brand trust.
Improved Development Rigor: Risk assessments, structured documentation, and real-time monitoring improve overall software quality.
Innovation Through Constraint: Regulatory sandboxes allow for safe experimentation. Early adopters can shape norms and refine product-market fit in regulated environments.
While the EU AI Act now serves as the binding legal framework, the 2017 Civil Law Rules on Robotics [EU 2017] continue to offer foundational ethical and engineering guidance. Though non-binding, these rules influenced the AI Act's design, particularly in areas like human oversight, transparency, and traceability. For QE professionals, the Robotics Rules bring additional relevance: they call for design principles like explainability, embedded traceability, and even a "black box" requirement for robots, early notions of software audit trails that now align with AI Act logging mandates.
Moreover, the Robotics Rules highlighted the need for civil liability reform, complementing the AI Act's ex-ante risk focus with a civil law perspective on post-harm accountability. This duality matters in QE: engineers must not only prevent harm through rigorous testing and risk assessment but also enable systems that can be audited and attributed when failures occur. By internalizing both the aspirational ethics of the Robotics Rules and the enforceable standards of the AI Act, quality engineering teams can design AI systems that are robust, responsible, and resilient.
The EU AI Act represents a new chapter in software development where compliance, quality, and innovation must converge. For quality engineers, this is not a challenge to overcome but a framework to amplify your impact.
By applying the Act’s principles throughout the software lifecycle, teams can build AI that is not only legally compliant but also ethically sound, technically robust, and socially beneficial.
In a world increasingly shaped by GenAI, trust is the product. And quality engineering is how we build it.