Optimistic people may think: "What could possibly go wrong?" Realistic people will answer: "Anything!" DevOps teams are aware that many things can go wrong in the process of IT delivery. If a product is created and deployed to the users without a second thought, unpleasant surprises may happen. Therefore, every product should be looked at by a team member other than the person who created it. It is amazing what someone else sees in a product that the author is convinced is perfect.
On this page you will see that executing tests on a ready-to-deploy product can reveal problems, but that it is much more effective and efficient to first review these products. These reviews are part of static testing.
Static and dynamic testing
Testing consists of static and dynamic testing. High-performance IT delivery teams often focus strongly on dynamic testing. Mature teams have learned that their efforts can be much more efficient by applying static testing. This is commonly called reviewing.
Static testing consists of three groups of approaches. We will discuss each of these groups in the following sections.
The most common way of reviewing in high-performance IT delivery is the informal review. Not many rules apply, but quality is still assured in an efficient way. We distinguish several approaches to informal reviewing.
Individual informal review
An author sends a deliverable to someone who they expect can assess (certain aspects of) the quality of that deliverable, with the request to review it. The other person then reviews it and gives feedback. That's all there is to an individual informal review. It is a very easy and low-cost way of reviewing. However, there is little insight into the breadth and depth of the reviewing.
The effectiveness of an individual informal review can be increased by using checklists or heuristics. Usually, the author is responsible for organizing the review process and for chasing the reviewers whenever necessary.
When using a check-out / check-in mechanism for code, as is common in continuous integration, the pull request is the usual moment to review changes.
|A pull request is a method of submitting contributions to a development project by which a developer, after making a change to code in a topic branch, asks for this change to be committed to the main branch (that is, to be included in the main repository). This involves static testing (i.e. reviewing) of the changed code, for example to check whether the change was done properly and whether it complies with maintainability and other code quality guidelines.|
The objective of the pull request is that the developer who changed the code asks another person to review the code and verify that the change is correct. A pull request is therefore an important review moment, usually performed as an informal review in which tools such as checklists and heuristics can be used.
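Part of such a checklist can even be automated as a first pass before the human review. As a minimal sketch (the rules and function names below are illustrative assumptions, not part of any specific tool or team's guidelines), a script could scan the changed lines for common guideline violations:

```python
# Minimal sketch of an automated pull-request checklist.
# The rules below are illustrative assumptions; a real team would
# derive them from its own coding guidelines.

def review_changed_lines(changed_lines, max_length=120):
    """Return (line_number, finding) tuples for a set of changed
    lines, as a first automated pass before human review."""
    findings = []
    for number, line in changed_lines:
        if len(line) > max_length:
            findings.append((number, "line exceeds %d characters" % max_length))
        if "TODO" in line:
            findings.append((number, "unresolved TODO left in the change"))
        if "print(" in line:  # crude check for leftover debug output
            findings.append((number, "possible leftover debug print"))
    return findings

# Example: two changed lines, one with a leftover TODO.
diff = [(10, "result = compute_total(order)"),
        (11, "# TODO remove this workaround")]
print(review_changed_lines(diff))
```

Such automated checks do not replace the human reviewer; they only free the reviewer from the most mechanical part of the checklist.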
The amigos approach is an approach whereby representatives of the various capabilities in a team get together to review a deliverable. In DevOps we commonly identify four capabilities in the cross-functional team: Business analysis, Development, Testing and Operations. They are called the four amigos. Whenever a deliverable needs to be reviewed, the four amigos study the deliverable and get together to discuss their findings. Because of the discussion and the exchange of views, a four amigos session is usually more effective than individual informal reviews.
In Model-Based Review (MBR), models are means to an end, the end being verifying that a deliverable is clear and complete. The tester composes one or more models so that end users, analysts, developers etc. can verify the tester's understanding of the subject. The sources could be tangible documents such as user story cards, but also "in the heads of anyone". The basic compelling idea behind MBR is that models are unambiguous by nature, so flaws such as incompleteness, inconsistency and incorrectness will be recognized more easily.
Models also provide a limited view of reality; several models often need to be composed to represent a complete picture of what is in the design artifacts or "in the heads of" those involved. For example, a process is best represented by a flow diagram, but a "Yes/No" decision in that process could be subject to several basic "Yes/No" conditions. These conditions could be modeled individually and explicitly in the flow diagram, but the model of preference for conditions is the decision table or pseudo code.
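To make the decision-table idea concrete, a simple "Yes/No" decision driven by two conditions can be written out as a table in which every combination of conditions appears exactly once. The conditions and outcomes below are invented for illustration:

```python
# A decision table for a hypothetical "grant discount?" decision,
# driven by two Yes/No conditions. Listing every combination of
# conditions explicitly makes gaps and contradictions visible.
decision_table = {
    # (is_loyal_customer, order_above_threshold): grant_discount
    (True,  True):  True,
    (True,  False): True,
    (False, True):  True,
    (False, False): False,
}

def grant_discount(is_loyal_customer, order_above_threshold):
    return decision_table[(is_loyal_customer, order_above_threshold)]

# The table has an entry for all 2**2 condition combinations,
# so reviewers can check completeness at a glance.
assert len(decision_table) == 2 ** 2
```

This explicitness is exactly why the decision table is the model of preference for conditions: an incomplete or inconsistent table is immediately apparent, whereas the same flaw hidden in prose is easily overlooked.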
The model is often discussed within the team and with the stakeholders. After they agree that the model is good enough, it will serve as a basis in the development process, during test design and execution and in the operational stage of the IT system.
Threat modelling is a specific technique in security testing for static analysis focusing on the security characteristics. For more information see "Security testing".
Other informal review techniques
Like threat modelling, there are more review techniques that focus on a specific quality characteristic. Other forms of general static testing may also exist or arise. It is therefore impossible for this section to be complete.
Based on the IEEE1028 standard we use three types of formal reviews (see figure below).
In DevOps teams (as well as other high-performance IT delivery teams) these formal types of reviews are not applied often. If you would like to apply them, the following aspects are relevant.
Working together to prevent waste: structured reviews can help a project benefit from reviews through early detection of anomalies, prevention of defects and getting a product fit for its purpose.
The phases in a structured review process are planning, preparation, execution, approval, finalization and reporting. A moderator announces the review and supports the review process in which selected reviewers judge a product from their own perspective. The author reworks his product based on the merged list of comments after which an approval phase is started by the moderator. After the rework is approved, the review process is finalized by a review report.
Some might notice that structured reviewing does not mention review techniques. The reason is that the chosen technique does not affect the structure of the process.
The moderator facilitates the review process and informs the author and reviewers of each step and each activity that needs to be performed.
Based on experience, the role of an active moderator is vital for the success of the review process. The moderator facilitates, administrates, inspires, triggers, reports, assists, chases and reminds in order to keep the review process going. The moderator's top priority is to get the quality of the documents up to par.
The author of a document or product makes sure the deliverable is as up to par as needed. A request for review is announced and handled via the moderator, in order to get assistance from the reviewers in getting the document to the required level of quality. The author treats the review comments as feedback, with the intention to take every comment seriously and rework the document accordingly. Unclear comments are clarified by contacting the reviewer; if the rework might not be right the first time, the author asks the moderator for a session to clarify the comments.
For each role, as defined in a review matrix (an extract of a RACI), at least one reviewer should participate in a review. Ideally, exactly one reviewer per role participates, because that leads to the most efficient reviews; more reviewers per role will likely lead to duplicate comments. Next to the reviewers identified by the review matrix, extra reviewers can be invited to participate, for instance subject matter experts.
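The rule "at least one reviewer per role from the review matrix" can be checked mechanically before a review starts. As an illustration (the document types, roles and reviewer names below are invented), a small check could look like this:

```python
# Sketch: verify that a planned review satisfies the review matrix.
# Document types, roles and reviewer names are invented for illustration.
review_matrix = {
    # document type: roles that must be represented in the review
    "requirements specification": {"Business analysis", "Testing"},
    "design document": {"Development", "Operations"},
}

def missing_roles(document_type, reviewers):
    """Return the required roles for which no reviewer participates.
    'reviewers' maps reviewer name -> role."""
    required = review_matrix[document_type]
    covered = set(reviewers.values())
    return required - covered

planned = {"Alice": "Business analysis", "Bob": "Development"}
print(missing_roles("requirements specification", planned))
# A non-empty result means the review is not yet properly staffed.
```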
Reviewers judge the deliverable from their own professional role. With respect to the author, the reviewer is polite, clear and to the point. When noting down a comment, the reviewer makes sure it is complete and detailed enough so that the author can understand the comment and is helped in reworking the document.
The moderator does not participate in the reviews but can perform some checks during the intake.
The role of management is important but limited. It starts with supporting the review process and encouraging the team to participate. It also concerns the creation of a safe, open, environment in which errors can be made and fixed without personal consequences.
Together with the moderator, the responsible manager fills in the review matrix, which determines which document types to review and which roles must or should participate in the review. The responsible manager decides on the timelines and the moment at which a review should be started. The moderator puts these agreements into the review plan.
All other managers, if not in the role as a reviewer, are involved as recipients of review reports, status reports and quality reports which can be used as input to plans and project plans.
At the start of the implementation of a review process, the details of the review process, including a review matrix, timelines and scope, are described in a review plan. In some cases, it can be useful to prepare and use some checklists and guidelines.
All activities around the reviews need to be administered, leading to an extensive administration. Input for this are the individual and merged review forms. Using this administration and the metrics that are gathered, each review will be accompanied by a specific review report.
Data of multiple reviews can be combined with other metrics and conclusions, leading to an overall review report. This report can contain statuses and indicators regarding product quality and process quality.
To benefit from reviews through early detection, prevention of defects and getting a product fit for its purpose, there are some critical success factors to keep in mind. Critical for success is the commitment of senior management to have a review process in place. The (dedicated) support of a skilled moderator is absolutely needed to get the review process implemented and to keep it going. Select a review technique that suits the product and the situation to get the most out of the review effort.
Since reviewing heavily depends on the quality and quantity of the reviewers' input, it is vital to keep this open stream of information going. Training the reviewers and instructing management on how to use the metrics can be useful measures for this. The first improves the quality and ROI of the reviews; the second should prevent the metrics from being used to judge the performance of reviewers or authors.
Apart from people doing reviews, tools can also be used for static testing. This we call static analysis. A basic form of static analysis that many people use without realizing it is the spelling and grammar checker of a word processor. This is a good example of a tool that supports the quality improvement of texts.
Similar tools exist for program code; these are generally called static analyzers. They may be tools solely meant for this specific purpose, but they may also be integrated in a compiler, for example. Types of faults that a static analyzer can detect are misspellings, violations of the rules of the programming language, but also logic faults such as unreachable code or unused variables. Static analyzers report faults, but they also issue warnings when the analyzer cannot determine for sure whether something is a fault or intended behavior.
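As a minimal sketch of how such a check can work, Python's standard `ast` module can be used to flag local variables that are assigned but never read. This is an assumption-laden toy, not a real analyzer: production tools such as linters perform far more checks and handle many more edge cases.

```python
import ast

# Toy static-analysis check: report variables that are assigned
# but never read anywhere in the source. Real static analyzers
# perform many more checks (scoping, unreachable code, etc.).
def unused_variables(source):
    tree = ast.parse(source)
    assigned, loaded = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                assigned.add(node.id)
            elif isinstance(node.ctx, ast.Load):
                loaded.add(node.id)
    return assigned - loaded  # assigned but never used

code = """
total = price * quantity
discount = 0.1          # assigned, never read: a likely fault
print(total)
"""
print(unused_variables(code))
```

Note that the tool cannot know whether `discount` is a genuine fault or simply code that is not finished yet, which is exactly why analyzers distinguish between faults and warnings.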
A specific measure for the maintainability of program code is cyclomatic complexity: the number of independent paths through the code. The only practical way to measure it is using a tool that calculates it. The outcome can be compared to the standard that the organization has set, to determine whether the program complies.
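To make the metric concrete, here is a rough sketch of how such a tool could count decision points with Python's `ast` module. Real tools cover more constructs (boolean operators, comprehensions, switch-like statements) and more languages; this sketch only illustrates the principle that complexity equals one plus the number of decision points.

```python
import ast

# Rough sketch of a cyclomatic-complexity count: one plus the
# number of decision points found in the parsed source.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler)

def cyclomatic_complexity(source):
    tree = ast.parse(source)
    decisions = sum(isinstance(node, DECISION_NODES)
                    for node in ast.walk(tree))
    return decisions + 1

code = """
def classify(value):
    if value < 0:
        return "negative"
    if value == 0:
        return "zero"
    for digit in str(value):
        pass
    return "positive"
"""
print(cyclomatic_complexity(code))  # 2 ifs + 1 for loop -> complexity 4
```

An organization could then compare the outcome against its standard, for example by requiring a complexity of at most 10 per function.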
Other static analysis techniques
There is a wide variety of tools that support forms of static analysis. Some tools are generally applicable; many are specific to a particular development environment or software package.
Overview static testing
Static testing consists of three groups of approaches with several approaches within these groups.
Please note that the IEEE1028 standard also defines Audits and Management Reviews. Since these techniques are about reviewing the process instead of the product, we have not included these techniques here.
During a review, anomalies are likely to be found. An often-recurring question is how to register these deviations between what is expected and what is described. When reviewing a document, the anomalies may be registered in the document itself (for example as a comment using the word processor). For reviews where the anomalies cannot be recorded in the document itself, a regular anomaly management system can be used, so that the anomalies from static testing and from dynamic testing are registered in the same system, which facilitates easy follow-up.