Quality of AI-Based Systems

It is widely assumed that computers, including AI-based systems, are more accurate than humans: in the decision-making process, such systems follow objective standards, enabling decisions that are fact-based and free of emotion. However, because AI technologies rely on probabilistic models, failures can occur. For such decisions to be trusted, rigorous testing is essential to mitigate a wide range of risks, such as bias, data leakage, and sustainability issues (see ‘Quality Characteristics for AI and GenAI’).


The quality of an AI-based system is determined by various factors relating to its ethical and effective use. Together, these factors form a framework that supports the development and deployment of high-quality AI systems. Key factors include:


  • Transparency: AI systems must be understandable to their users, allowing them to comprehend how decisions are made.
  • Inclusion: AI systems must not discriminate against anyone, recognizing the equal dignity of every human being.
  • Accountability: There must always be someone who takes responsibility for the actions of the machine.
  • Impartiality: AI systems must not create or perpetuate unfair biases.
  • Reliability: AI systems must perform consistently well under different conditions.
  • Security and Privacy: AI systems must be secure and must respect the privacy of users and other affected people.
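Some of these factors can be made concrete through testing. As an illustration of how impartiality might be checked, the sketch below computes the demographic-parity difference: the gap in positive-outcome rates between two groups of people affected by a model's decisions. The function name, sample data, and threshold are illustrative assumptions, not part of any cited standard.

```python
def demographic_parity_difference(predictions, groups):
    """Absolute difference in positive-prediction rates between two groups.

    predictions: list of 0/1 model outputs
    groups:      list of group labels, aligned with predictions
    """
    rates = {}
    for label in set(groups):
        selected = [p for p, g in zip(predictions, groups) if g == label]
        rates[label] = sum(selected) / len(selected)
    rate_a, rate_b = rates.values()  # assumes exactly two groups
    return abs(rate_a - rate_b)

# Illustrative data: the model approves 75% of group A but only 25% of group B.
preds  = [1, 1, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

diff = demographic_parity_difference(preds, groups)
print(f"demographic parity difference: {diff:.2f}")

# An illustrative, project-specific acceptance threshold.
BIAS_THRESHOLD = 0.10
if diff > BIAS_THRESHOLD:
    print("WARNING: model fails the impartiality check; review before release")
```

In practice, a quality engineer would choose the fairness metric and threshold together with domain experts, since different metrics (demographic parity, equalized odds, and others) can be in tension with one another.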


Some of these factors relate to quality characteristics of IT systems, which are described in ‘Quality Characteristics for AI and GenAI’. Others relate to the way IT systems are created and provided; these are described in ‘The EU AI Act and its impact on Quality Engineering’.