The provisioning and setup of test environments, and everything that comes with it, is a major point of concern for DevOps teams in most organizations. This is because in DevOps the responsibility for the infrastructure has shifted from a separate department to the DevOps team itself. As a result of this shift, important parts of the infrastructure's quality become a team responsibility. Knowledge of the infrastructure helps the team to ultimately improve the quality of the IT system as a whole.
To achieve this, it is essential that every team member has a basic understanding of the IT system's infrastructure and its constituent parts.
Example of constituent parts of an IT system.
As described in the World Quality Report, organizations are discovering that simply placing the responsibility for test infrastructure with the individual teams often does not bring the best results for the organization as a whole. Organizations need to create greater awareness of the issues that surround test environments, including the mappings, integration points and configurations that make them fit for purpose. Organizations will benefit from implementing cloud-based test environments, but also virtualized, containerized and temporary but non-cloud-based test environments. Creating the relevant strategies (such as a virtualization strategy) cannot be expected from individual DevOps teams. The organization will therefore need a specialized support team that, among other things, uses automated provisioning technologies to assist the DevOps teams in provisioning fit-for-purpose test environments. Apart from making the work of the individual DevOps teams more effective, this will ultimately also raise cost-efficiency for the organization.
|Test infrastructure consists of the facilities and resources necessary for the satisfactory execution of the test. It consists (among others) of test environments, test tools and workplaces.|
|A test environment is a composition of parts, such as hardware and software, connections, environment data, tools and operational processes in which a test is carried out.|
A topic closely connected to test infrastructure is test data management, which is described as a separate topic (TDM).
Nowadays, the configuration of the IT system's infrastructure (often cloud-based infrastructure) is commonly defined using machine-readable code (so-called infrastructure-as-code). This results in more efficient provisioning of environments and reduces human error and the replication of faults, which ultimately leads to fewer anomalies caused by testing errors.
With a proper infrastructure-as-code solution (see the example below), the team effectively assesses the quality of the IT system including its infrastructure. The test set should be executed in an environment that has behavior and characteristics similar to those of the live system. It is important that behavior such as permissions, access rights and integration with third-party interfaces is in place.
|Infrastructure-as-code (IaC) is the process of managing and provisioning computer environments through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools (based on [Wittig 2016]).|
In DevOps, people do not need to provision entire environments, as they can use service virtualization tools to virtualize the parts of the environment that are unavailable for testing, still in development, or third-party services. With automation, complete ephemeral environments can be created on demand and decommissioned when testing is complete, reducing both the time and the cost associated with maintaining environments. Cloud technology or containerization can help with this, and the adoption of virtual environments is a logical step in automating the delivery pipeline.
Example of infrastructure definition as code.
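A minimal sketch of the infrastructure-as-code idea, in Python rather than a real provisioning tool: the environment is described as plain data, and a provisioning function (here only simulated) turns that description into an environment. All names and values (`ENVIRONMENT_SPEC`, the server roles, the `provision` function) are invented for this illustration.

```python
# Hypothetical, simplified illustration of infrastructure-as-code:
# the environment is a machine-readable definition, and provisioning
# is a repeatable function of that definition.

ENVIRONMENT_SPEC = {
    "name": "acceptance-test",
    "servers": [
        {"role": "web", "memory_gb": 4, "open_ports": [80, 443]},
        {"role": "db", "memory_gb": 16, "open_ports": [5432]},
    ],
}

def provision(spec):
    """Simulate provisioning: return the resulting environment state."""
    return {
        "name": spec["name"],
        "servers": {
            s["role"]: {"memory_gb": s["memory_gb"], "ports": sorted(s["open_ports"])}
            for s in spec["servers"]
        },
    }

# Two provisioning runs from the same definition yield identical
# environments, which is what reduces human error and replicated faults.
env_a = provision(ENVIRONMENT_SPEC)
env_b = provision(ENVIRONMENT_SPEC)
assert env_a == env_b
```

In a real solution this definition would live in version control next to the application code, so environment changes are reviewed and tested like any other change.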
When performing integration tests, the team may not be able to connect to all relevant systems. Service virtualization is a good option for replacing systems that are not available; for more information see "Test automation". Apart from service virtualization there are other ways to simulate the integration with other (sub)systems, for example by using self-developed stubs, drivers or mocks.
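As a sketch of the self-developed-stub option, the following replaces an unavailable third-party service with a stub that returns canned responses and records the calls it receives. The service, its interface and all names (`PaymentServiceStub`, `authorize`, `checkout`) are hypothetical, chosen only to illustrate the technique.

```python
# Illustrative stub standing in for a third-party service that is
# unavailable during integration testing (all names are invented).

class PaymentServiceStub:
    """Replaces the real service; returns canned responses."""

    def __init__(self):
        self.requests = []  # record calls for later verification

    def authorize(self, amount_cents, card_token):
        self.requests.append(("authorize", amount_cents, card_token))
        if amount_cents <= 0:
            return {"status": "rejected", "reason": "invalid amount"}
        return {"status": "approved", "auth_id": "stub-0001"}

def checkout(payment_service, amount_cents, card_token):
    """Part of the system under test: depends only on the interface."""
    result = payment_service.authorize(amount_cents, card_token)
    return result["status"] == "approved"

# The integration test runs against the stub instead of the real service.
stub = PaymentServiceStub()
assert checkout(stub, 1999, "tok-abc") is True
assert checkout(stub, 0, "tok-abc") is False
assert len(stub.requests) == 2
```

Because the stub records incoming requests, the test can also verify *how* the system under test called the service, not just what it returned.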
Because the team is responsible for parts of the infrastructure, it is also responsible for their quality. The team can verify the configuration of the infrastructure components and the integration between them.
Examples of such verifications are:
- The installed operating system and its version
- Status of databases (up or down)
- Accessibility of servers (firewalls, port numbers)
- Size of internal and external memory
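Two of the checks above (server accessibility and database up/down status) can be sketched as small Python functions; the hosts and ports are placeholders, and a real database check would of course go further than probing a port.

```python
import socket
from contextlib import closing

def port_open(host, port, timeout=2.0):
    """Accessibility check: can a TCP connection to host:port be opened?"""
    with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

def database_up(host, port):
    """Crude 'database up or down' check: is its port reachable?"""
    return port_open(host, port)

# Demonstration against a throwaway local listener (stands in for a server).
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]
assert port_open("127.0.0.1", port)  # reachable while the listener is up
listener.close()
```

Checks like these can run as an early stage in the pipeline, so a misconfigured environment fails fast instead of producing misleading test results.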
Easily set up environments
Cloud and containerization technologies make it easy to set up and configure a (complete) environment on the fly. For every CI/CD pipeline stage, a fresh environment can be set up with a preloaded start situation, and these environments can run in parallel. After the stage is completed and the data is analyzed and, where needed, saved, the environment can be torn down as fast as it was created. Using containerization technologies has various advantages; one of them is that it becomes very easy to verify the behavior of an IT system with an upgraded operating system or upgraded libraries.
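The create-use-tear-down lifecycle described above can be sketched in miniature. This illustration uses a temporary directory as a stand-in for a containerized environment; the class name and the "start situation" data are invented for the sketch.

```python
# Miniature sketch of an ephemeral test environment's lifecycle:
# set up fresh with a preloaded start situation, use it for one
# stage, save what is needed, then tear it down.

import json
import shutil
import tempfile
from pathlib import Path

class EphemeralEnvironment:
    def __init__(self, start_situation):
        self.start_situation = start_situation
        self.root = None

    def __enter__(self):
        # Set up: fresh environment with a preloaded start situation.
        self.root = Path(tempfile.mkdtemp(prefix="test-env-"))
        (self.root / "data.json").write_text(json.dumps(self.start_situation))
        return self

    def __exit__(self, *exc):
        # Tear down: remove the environment as fast as it was created.
        shutil.rmtree(self.root)
        self.root = None

with EphemeralEnvironment({"users": 3}) as env:
    data = json.loads((env.root / "data.json").read_text())
    results = {"users_seen": data["users"]}  # analyze and save the data
assert env.root is None
assert results == {"users_seen": 3}
```

With containers the same pattern applies at environment scale: each pipeline stage gets its own instance, several can run in parallel, and nothing survives the stage except the results that were deliberately saved.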
|Containerization is a lightweight alternative to full machine virtualization that involves encapsulating an application in a container with its own operating environment. This provides many of the benefits of loading an application onto a virtual machine, as the application can be run on any suitable physical machine without any worries about dependencies [Webopedia 2020].|
|Cloud technology is the use of various services, such as software development platforms, servers, storage and software, over the internet.|
In general, there are three cloud computing characteristics that are common among all cloud-computing vendors:
- The backend of the application (especially hardware) is completely managed by a cloud vendor.
- A user only pays for services used (memory, processing time and bandwidth, etc.).
- Services are scalable [Techopedia 2020].
Workstations and other infrastructure
Because people in DevOps are jointly responsible for all tasks and activities, each of them needs access to a workstation with all the capabilities required to perform those tasks. This creates specific demands for the performance and capacity of the computer hardware, especially when the test environment runs on their own workstation using containerization. When cloud technology is used, the hardware demands may be totally different.
Below are a number of common considerations that are relevant when determining the workstation requirements of people in DevOps.
- Different user accounts for testing the various roles that users can have
- Administrator rights – the team members must be able to install and uninstall software on the environment
- Processor power – the workstation must have enough power to run the relevant software, tools, etc.
- Memory – the workstation must have enough memory to run all systems
- Management and maintenance of the workstation and licenses
A team may decide to request one or more extra workstations for dedicated tasks such as long-running performance tests or regression tests.
Other computer infrastructure needed by the team varies widely. Consider, for example, the huge number of different mobile devices today, but also new developments such as connected devices in the Internet of Things.
With regard to network infrastructure, the team has to consider whether the standard company network should be used or whether the team needs a separate network to prevent interference between the activities of the team and those of other systems.