Moving from Testing at the End to Forecasting Quality

Half a century ago, the view of software quality was still simple: just find the errors. In 1979, Glenford Myers defined testing in his book The Art of Software Testing as “the process of executing a program with the intent of finding errors” [Myers 1979]. The underlying assumption was that once the existing faults were fixed, the software would be perfect. In his book Perfect Software and Other Illusions About Testing [Weinberg 2008], Jerry Weinberg articulated a truth long recognized by IT practitioners: in today’s landscape of complex, interconnected information systems, perfection is unattainable. There will always be residual risks. The objective, therefore, is not to eliminate all faults, but to achieve the right level of quality while mitigating the most important quality risks. This pragmatic view is reflected in the TMAP definition of testing (see the introduction to quality engineering and testing).

Modern software engineering no longer relies on late-stage testing as a safety net to ensure quality. Instead, quality is proactively built in from the very beginning, followed by smart testing where necessary. Using insights from processes and interim results, AI based on prescriptive analytics can forecast quality levels early and intervene proactively when quality falls short, achieving the right level of quality at the right time.
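
The idea can be made concrete with a minimal sketch. The example below is illustrative only: the metric (test pass rate per sprint), the target, the release milestone, and all numbers are assumptions, not TMAP artifacts, and a real AI-based approach would use far richer process data and models than a linear trend. It shows the pattern: forecast a quality level from interim results, compare it with the target, and recommend intervening early when the forecast falls short.

```python
# A minimal sketch of quality forecasting from interim results.
# All names, targets, and numbers are illustrative assumptions.

from statistics import mean

# Hypothetical interim quality measurements: test pass rate per sprint.
sprints = [1, 2, 3, 4, 5]
pass_rates = [0.78, 0.81, 0.83, 0.84, 0.86]  # fraction of tests passing

TARGET = 0.98          # assumed quality target at release
RELEASE_SPRINT = 10    # assumed release milestone


def linear_forecast(xs, ys, x_future):
    """Fit an ordinary least-squares trend line and extrapolate it."""
    x_bar, y_bar = mean(xs), mean(ys)
    num = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
    den = sum((x - x_bar) ** 2 for x in xs)
    slope = num / den
    intercept = y_bar - slope * x_bar
    return intercept + slope * x_future


forecast = linear_forecast(sprints, pass_rates, RELEASE_SPRINT)
print(f"Forecast pass rate at release: {forecast:.2f}")

# Prescriptive step: if the forecast falls short of the target,
# recommend intervening now instead of discovering the gap at the end.
if forecast < TARGET:
    print("Forecast below target: intervene early, e.g. by adding "
          "quality measures to the riskiest areas.")
```

The point of the sketch is not the trend line itself but the shift in timing: instead of measuring quality once at the end, each interim result updates a forecast, so a shortfall becomes visible while there is still time to act.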

Figure: Evolution from testing at the end to quality forecasting.