In the TMAP book Quality for DevOps teams [Marselis 2020], we introduced the TMAP quality engineering topics, also known as QE & Testing topics.
A topic is a set of generic activities for a specific theme that are always relevant for quality engineering, regardless of the applied IT delivery model.
TMAP describes 20 topics in two groups.
The TMAP topics relate to all three categories of quality measures (build quality in, provide information about quality, and improve quality), with particular emphasis on the provide information category. It is important to note that quality engineering covers a wider range of activities than just what is explicitly documented in TMAP. Other approaches, methodologies, and standards must be applied in combination with TMAP.
In this building block, you will find examples of how GenAI can support the activities of these topics. Keep in mind that these are only a few examples; many more uses of GenAI will be implemented over time. The main goal of these examples is to inspire you to explore the use of GenAI for any quality engineering activity.
For some of the topics it is obvious that GenAI can provide support. Examples are test design (to generate test cases) and test automation (to generate test automation scripts). When you look into the less obvious topics, you will quickly notice that all topics can, in one way or another, be supported by GenAI tooling.
We will mention at least one possible use of GenAI per topic. Keep in mind that many more possibilities already exist and even more will emerge in the near future.
This figure shows the quality engineering topics. The sections below give examples of possible ways to use GenAI, or other manifestations of AI.
In many cases, AI can assist in this review by generating test summaries, highlighting redundant patterns, or clustering similar tests. But final decisions still require human interpretation.
For documenting a Quality & Test Policy, nine different subjects have to be elaborated. GenAI can support in analyzing the mission and vision of the organization and, based on those, generate the description for the subjects of the policy.
GenAI can advise regarding balanced roles and responsibilities within a team and identify missing roles or responsibilities. Based on team skills and project scope, AI can suggest a balanced role distribution to increase team effectiveness.
GenAI can analyze data resulting from monitoring activities. Based on predictive analysis (also known as quality forecasting), it can trigger control actions automatically, for example to mitigate risks before they turn into failures, or to optimize performance.
GenAI can also generate monitoring scripts and support the implementation of monitoring processes and tools, contributing to faster and more efficient monitoring mechanisms.
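The threshold-triggered control actions described above can be sketched in a few lines. This is a minimal illustration, not a real monitoring tool; the metric names, threshold values, and alert format are invented for the example.

```python
# Minimal sketch of threshold-triggered control actions on monitoring data.
# The metric names, thresholds, and alert texts are hypothetical examples.

THRESHOLDS = {
    "error_rate": 0.05,      # maximum acceptable fraction of failed requests
    "p95_latency_ms": 800,   # maximum acceptable 95th-percentile response time
}

def check_metrics(metrics: dict) -> list:
    """Return a control action for every metric exceeding its threshold."""
    actions = []
    for name, limit in THRESHOLDS.items():
        value = metrics.get(name)
        if value is not None and value > limit:
            actions.append(f"ALERT: {name}={value} exceeds {limit}")
    return actions

# Example: only error_rate is over its threshold, so one alert is raised.
print(check_metrics({"error_rate": 0.08, "p95_latency_ms": 420}))
```

In a real pipeline, the returned actions would feed an alerting or auto-remediation mechanism rather than being printed.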
GenAI can support Root Cause Analysis by automatically classifying and prioritizing anomalies, and (when available) analyze documentation to suggest possible root causes. It can also analyze large numbers of anomalies to identify patterns and recurring root causes. This accelerates problem-solving and supports continuous improvement of the IT delivery process.
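The pattern-finding step of such an analysis can be illustrated with a deliberately simple keyword grouping; a GenAI tool would use far richer semantic clustering. The keyword list and anomaly texts are invented examples.

```python
# Sketch: group anomaly reports by shared keywords to surface recurring
# root-cause patterns. Keywords and report texts are hypothetical examples;
# a GenAI tool would cluster on meaning rather than literal substrings.
from collections import defaultdict

KEYWORDS = ["timeout", "null", "permission"]

def group_anomalies(reports: list) -> dict:
    """Map each keyword to the anomaly reports that mention it."""
    groups = defaultdict(list)
    for report in reports:
        lowered = report.lower()
        for keyword in KEYWORDS:
            if keyword in lowered:
                groups[keyword].append(report)
    return dict(groups)

reports = [
    "Timeout calling payment service",
    "NullPointerException in order module",
    "Database timeout during nightly batch",
]
print(group_anomalies(reports))  # two timeout-related anomalies form a pattern
```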
GenAI can generate reports, charts, and real-time dashboards, and even provide storytelling about quality and risks. For these, AI summarizes relevant measurement data and, when specific thresholds are exceeded, creates automatic alerts to trigger control actions.
This way, GenAI supports real-time insight into the quality of the products, processes, and people involved in the IT delivery process.
GenAI can compare a new story with a large set of reference stories to find the closest match and generate estimates based on historical data, taking various parameters into consideration. This way, it can propose an estimate to the team, which can speed up the estimation process.
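The closest-match idea can be sketched with a simple word-overlap (Jaccard) similarity; a GenAI tool would use semantic embeddings instead. The reference stories and their story-point estimates are invented examples.

```python
# Sketch: estimate a new user story by finding the most similar reference
# story and reusing its historical estimate. Word-overlap (Jaccard) similarity
# stands in for the semantic matching a GenAI tool would perform.
# The reference stories and story points are hypothetical examples.

def similarity(a: str, b: str) -> float:
    """Jaccard index of the two stories' word sets."""
    words_a, words_b = set(a.lower().split()), set(b.lower().split())
    return len(words_a & words_b) / len(words_a | words_b)

REFERENCE_ESTIMATES = {
    "as a user i can reset my password": 3,
    "as an admin i can export monthly reports": 8,
}

def estimate(story: str) -> int:
    """Propose the story points of the closest historical match."""
    best = max(REFERENCE_ESTIMATES, key=lambda ref: similarity(story, ref))
    return REFERENCE_ESTIMATES[best]

# Closest match is the password-reset story, so its estimate is proposed.
print(estimate("as a user i can change my password"))
```

The team would treat such a proposal as a starting point for discussion, not as a final estimate.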
GenAI can help optimize planning of resources and scheduling tasks, by analyzing dependencies between user stories, and dependencies on required resources, while applying priorities and critical path analysis.
GenAI can optimize the use of infrastructure, based on historical and current data about the project. It can also support the generation of infrastructure-as-code scripts or analyze existing scripts to suggest improvements. Also, GenAI can analyze the actual use of infrastructure and – based on prescriptive analytics – actively adjust the infrastructure settings when the performance of the infrastructure exceeds specified thresholds.
GenAI can offer support in selecting tools by leveraging knowledge of the existing tool stack, other tools that are available on the market, and the project’s specific needs. Also, AI can support (and largely automate) the deployment and maintenance of tools and frameworks.
GenAI can provide guidance on selecting appropriate metrics, aligned with the improvement goals set by users or other stakeholders. Subsequently, it can perform the relevant measurements to gather input for continuous improvement.
Using the gathered measurements, GenAI can guide improvements in both the effectiveness and the efficiency of the IT delivery process. Also, GenAI can assist in analyzing the effects of improvement activities. Based on this analysis, it can support decisions on whether to continue, adapt, or enhance improvement initiatives.
GenAI can generate an overview of requirements to serve as the starting point for risk analysis. It can also support setting up the test strategy, including choices for relevant test design techniques and test approaches.
GenAI can analyze transcripts of online meetings and generate acceptance criteria for the user stories or features. Additionally, it can advise on the completeness and correctness of the acceptance criteria for the total set of user stories, taking the definition of done into consideration.
TMAP distinguishes three groups of quality measures: to build the right quality from the start, to provide information about the quality level and to improve the quality of deliverables. GenAI can advise on which combination of quality measures should be applied in the situation at hand.
It can also support the application of specific quality measures in all three categories, such as co-creating requirements, generating synthetic test data, or suggesting refactoring options.
GenAI can analyze texts such as requirements, user stories, or program code, and provide automated feedback to enhance their quality. Also, GenAI may be used as one of the team members in a 4 amigos session, thus using the combined skills of people and tools.
GenAI can support the application of test design techniques and test design approaches to a specific test basis in order to generate test cases. This is one of the first use cases people think of, and experience shows that it still requires proper human oversight to ensure the correctness and completeness of the resulting test cases and test data. GenAI is very good at generating test ideas and at creating checklists and heuristics to use as a basis for designing detailed tests. It can also help distinguish and prioritize test cases based on quality risk or business value.
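One classic test design technique that tooling can apply mechanically is boundary value analysis. The sketch below derives boundary test values for a numeric input range; the example range (valid ages 18 to 65) is invented for illustration.

```python
# Sketch: boundary value analysis for a numeric input range.
# For each boundary we test just outside, on, and just inside the limit.
# The example range (valid ages 18..65) is a hypothetical requirement.

def boundary_values(low: int, high: int) -> list:
    """Return the six classic boundary-value test inputs for [low, high]."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

# Test inputs for a field that accepts ages 18 through 65.
print(boundary_values(18, 65))
```

Each value would then be combined with its expected outcome (accept or reject) to form a concrete test case.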
Strict privacy regulations have forced organizations to use only anonymized or (even better) synthetic test data. GenAI is very well suited to the anonymization process, and it can also generate realistic synthetic test data.
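The idea of synthetic test data can be illustrated with the standard library alone: records are assembled from value pools and contain no real personal data. The field names and value pools are invented examples; GenAI-based generators produce far more realistic and varied data.

```python
# Sketch: generate synthetic (non-personal) test data with the standard
# library. Field names and value pools are hypothetical examples; a GenAI
# generator would produce more realistic and varied records.
import random

FIRST_NAMES = ["Alex", "Sam", "Kim", "Robin"]
CITIES = ["Utrecht", "Rotterdam", "Eindhoven"]

def synthetic_customer(rng: random.Random) -> dict:
    """Build one synthetic customer record from the value pools."""
    return {
        "name": rng.choice(FIRST_NAMES),
        "city": rng.choice(CITIES),
        "age": rng.randint(18, 90),
    }

rng = random.Random(42)  # fixed seed makes the test data reproducible
customers = [synthetic_customer(rng) for _ in range(3)]
print(customers)
```

Because no record is derived from production data, such data sets carry no privacy risk.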
Writing and optimizing test automation code is essentially a development skill, and GenAI (especially Large Language Models) is very good at generating program code in structured programming languages. GenAI is also brilliant at generating small tools to support the testing process. It can also help analyze and resolve issues encountered while running the test automation scripts.
GenAI can orchestrate a test automation suite and schedule the execution of automated test cases, enabling continuous and efficient testing. GenAI is also rapidly improving its ability to support exploratory testing.
GenAI can analyze actual test results, compare them with expected outcomes, and – if necessary – support the writing of anomaly reports.
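The comparison step can be sketched as a simple diff between expected and actual outcomes, with a draft anomaly line per mismatch. The test IDs and result values are invented examples; a GenAI assistant would additionally draft the narrative parts of the anomaly report.

```python
# Sketch: compare actual test results with expected outcomes and draft a
# one-line anomaly report for each mismatch. Test IDs and result values
# are hypothetical examples.

def find_anomalies(expected: dict, actual: dict) -> list:
    """Return a draft anomaly line for every result that deviates."""
    reports = []
    for test_id, exp in expected.items():
        act = actual.get(test_id)
        if act != exp:
            reports.append(f"{test_id}: expected {exp!r}, got {act!r}")
    return reports

expected = {"TC-01": 200, "TC-02": 404}
actual = {"TC-01": 200, "TC-02": 500}
print(find_anomalies(expected, actual))
```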