Testing AI with AI: Keep an Expert in the Loop

Testing AI with AI is a special case of applying artificial intelligence: using AI-based tools to test systems that are themselves based on (generative or other) artificial intelligence.

This is an appealing and sometimes feasible option. However valid it may seem, before deciding to do so, the people involved must carefully weigh the fact that they will be testing a system whose exact behavior they don't know by using another system whose exact behavior they don't know. In other words, uncertainties are being stacked on top of each other. On the other hand, one may argue that in our modern systems of systems we have long been accustomed to trusting systems we don't fully understand.

Managing Quality Risks

The key is to mitigate these risks by keeping an expert in the loop: someone who knows what the system under test is supposed to do, who understands how artificial intelligence can be applied to test such a system, and who can recognize whether the reported quality level actually reflects the quality level a human will perceive.
The final judgment on whether there is enough confidence that the pursued business value will be achieved must still be made by a human being, because accountability for business processes cannot be delegated to a machine, no matter how intelligent it may seem.
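As a minimal sketch of what such an expert-in-the-loop setup could look like in practice (all names and thresholds here are hypothetical illustrations, not taken from any specific tool): an AI-based test tool produces verdicts with a self-reported confidence score, low-confidence verdicts are routed to a human expert, and the final release decision always requires human sign-off.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    test_case: str
    passed: bool
    confidence: float  # the AI tool's self-reported confidence, 0.0 to 1.0

def triage(verdicts: list[Verdict], threshold: float = 0.9) -> list[Verdict]:
    """Route verdicts the AI tool is not sufficiently sure about to a human expert."""
    return [v for v in verdicts if v.confidence < threshold]

def expert_review(verdict: Verdict) -> bool:
    """Placeholder for the human expert's judgment; this step is never automated away."""
    answer = input(f"{verdict.test_case}: AI says {'PASS' if verdict.passed else 'FAIL'} "
                   f"(confidence {verdict.confidence:.2f}). Accept? [y/n] ")
    return answer.strip().lower() == "y"

def release_decision(verdicts: list[Verdict]) -> bool:
    """The final go/no-go is a human decision, informed (not replaced) by the AI verdicts."""
    for v in triage(verdicts):
        if not expert_review(v):
            return False
    # Even when all verdicts stand, a human remains accountable for the release.
    return input("Expert: approve release? [y/n] ").strip().lower() == "y"
```

The design choice is that the AI tool only filters and informs: every low-confidence verdict, and the release decision itself, passes through a human who can be held accountable.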

Apply Critical Thinking

One of the risks associated with using AI is that people fail to apply critical thinking when assessing the results of an AI-based solution. People are easily misled by polished-looking output, or may lack the knowledge (and sometimes the curiosity) needed to judge its validity. This is known as "automation bias": the tendency to place high trust in a result simply because it was produced by an automated system.
One of the outcomes of testing an AI-based solution may be a usage guideline that describes the situations in which the solution can be used and the situations in which using it is unwise (or sometimes even illegal). A sketch of such a guideline follows below.
Such a guideline helps raise awareness among the people involved of the risks an AI-based solution brings.
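A usage guideline can even be made machine-readable, so that the conditions under which the solution may be used are checked explicitly rather than left to memory. The sketch below is purely illustrative: the situations, rules, and reasons are hypothetical examples, not taken from the original text.

```python
# A hypothetical, machine-readable usage guideline: each entry states whether
# the AI-based solution may be used in a given situation, and why (not).
USAGE_GUIDELINE = {
    "summarizing internal meeting notes": (True, "low risk; a human reviews the summary"),
    "drafting customer-facing legal advice": (False, "unwise: expert legal review required"),
    "automated medical diagnosis": (False, "illegal without a certified professional in the loop"),
}

def may_use(situation: str) -> bool:
    """Look up a situation in the guideline; unknown situations default to 'do not use'."""
    allowed, reason = USAGE_GUIDELINE.get(
        situation, (False, "situation not assessed; default to not using the solution"))
    print(f"{situation}: {'allowed' if allowed else 'not allowed'} ({reason})")
    return allowed
```

Defaulting unknown situations to "not allowed" reflects the same principle as before: trust in the AI-based solution has to be earned per situation, not assumed.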

Read more about ‘Critical Thinking’.