What is Infrastructure testing?
Infrastructure testing is that part of a test project covering the product risks that relate to the target infrastructure. Typical projects with such product risks include hardware migrations, lifecycle management projects or newly built system deliveries. Such projects may provide interesting challenges for any test manager.
This building block gives an overview of which technical aspects may be involved when testing infrastructure and how to address such aspects within a test strategy. We start by briefly covering the InFraMe® standard areas of infrastructure:
The client consists of the client device (hardware), the OS, applications and user settings. It may also be referred to as ‘workspace’ or ‘workplace’. It allows the user to access software and data stored locally on the client and/or centrally in a data center or the cloud. In a virtualized setting, applications or even complete clients run on servers and are deployed to the client as if they were running locally.
A typical workspace migration project will involve new client hardware, a new OS and migration of the applications and user settings. When virtualized, a new virtualization backend will also be in scope. Workspace migration is usually considered to be an infrastructure project, but it implies a substantial effort in both technical and functional testing.
A system may require parts to be hosted on one or more servers. These may be physical servers on “bare metal” hardware or virtual servers created on a shared hardware cluster. It is crucial to know the appropriate sizing of these servers with regard to, among other things, CPUs, memory and storage (see below), preferably based on projections for the (near) future.
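As an illustration, such a sizing check can be sketched in a few lines; all figures below (current usage, growth rate, headroom, proposed capacity) are hypothetical examples, not values from any real project:

```python
# Illustrative sizing check: verify a proposed server spec covers projected
# growth with some headroom. All figures are hypothetical.

def required_capacity(current: float, annual_growth: float, years: int,
                      headroom: float = 0.3) -> float:
    """Project current usage forward and add headroom for peaks."""
    projected = current * (1 + annual_growth) ** years
    return projected * (1 + headroom)

# Hypothetical figures: 180 GB storage today, 25% yearly growth, 3-year horizon.
needed_gb = required_capacity(180, 0.25, 3)
proposed_gb = 500  # storage in the proposed server spec

print(f"needed: {needed_gb:.0f} GB, proposed: {proposed_gb} GB, "
      f"sufficient: {proposed_gb >= needed_gb}")
```

The same calculation applies to CPU and memory; only the units change.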
Servers can be used for various purposes. Commonly, application servers will host one or more applications and database servers will host one or more databases. Examples of other types of servers are web servers, file servers, mail servers, print servers, time servers and proxy servers.
The choice of OS to run on an application server will impact any application running on it. It is important to realize that different applications may or may not be compatible with a given OS or a specific OS version. Also, when installing the OS it will be necessary to create zones or partitions to divide the system's resources over the various parts of the system.
Servers may run any number of services: software functions that can be used by the clients. Services may provide information retrieval, execution of operations, anti-malware protection, management agents, application monitoring, resource sharing and so on.
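A basic smoke test for a service can be as simple as verifying that something is listening on its port. The sketch below spins up a throwaway local listener so it is self-contained; in a real test the host and port of the actual service would be used:

```python
# Minimal sketch: check whether a service is reachable on a TCP port.
# Host and port here are placeholders; a real test targets the real service.
import socket

def service_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a throwaway listener on localhost to keep the sketch runnable.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))      # OS picks a free port
listener.listen(1)
port = listener.getsockname()[1]

print(service_reachable("127.0.0.1", port))   # something is listening
listener.close()
print(service_reachable("127.0.0.1", port))   # nothing listening anymore
```

Checks like this only prove connectivity; functional behaviour of the service still needs its own tests.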
A network contains the infrastructure needed to make servers communicate with each other, clients and other networks. Connecting a system properly within a network may require firewalls to be in place and configured, network changes, proxy servers or stepping stone servers, amongst other things.
Often networks are divided into segments which may have different security policies. For example, a separate segment, often called a DMZ, may handle connectivity with the internet so the main company network is not exposed. Sometimes a supplier or a merged company may also have a separate network segment, restricting access to the main company network.
Systems within a network may have availability requirements so strict that specific measures are needed to fulfill them. Examples are twinning mode, where the load on a system is balanced over two active locations; hot standby, where a system is running but not serving load at a separate location; and cold standby, where a system is installed but not running at a separate location. Correct implementation of such measures requires the network architecture to support these modes. Using virtualization within the network may provide standardized software solutions for supporting this. These modes may also require a documented procedure for handling a disaster scenario.
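To make the difference between these modes concrete, here is a minimal, purely illustrative model of failover behaviour; the node names and simplified state flags are assumptions, not part of any standard:

```python
# Illustrative model of the availability modes described above, reduced to
# three state flags per node. Names and states are assumptions for the sketch.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    running: bool = False   # is the node powered on and operational?
    serving: bool = False   # is the node actively handling load?

def failover(primary: Node, standby: Node) -> Node:
    """Promote the standby when the primary fails."""
    primary.serving = primary.running = False
    if not standby.running:          # cold standby must be started first
        standby.running = True       # (in reality: boot, patch, restore data)
    standby.serving = True
    return standby

# Twinning: both nodes serve; losing one halves capacity, no promotion step.
a = Node("dc1", running=True, serving=True)
b = Node("dc2", running=True, serving=True)

# Hot standby: the standby is already running, so promotion is fast.
hot = failover(Node("primary", running=True, serving=True), Node("hot", running=True))

# Cold standby: the standby is only installed; promotion includes startup.
cold = failover(Node("primary", running=True, serving=True), Node("cold"))

print(hot.serving, cold.serving, cold.running)  # True True True
```

The failover test for a real system would additionally measure how long the promotion takes against the recovery-time requirement.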
Usually monitoring solutions are available within the network to signal critical issues, such as server downtime, excessive resource usage, system access events, hardware temperature and OS patch levels, amongst other things. Monitoring may also be done at the application management level.
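A monitoring check of this kind boils down to thresholds applied to a metric sample; the metric names and limits below are illustrative assumptions, not taken from any specific monitoring product:

```python
# Sketch of threshold-based monitoring checks. Metric names and limits are
# illustrative assumptions, not values from a real monitoring tool.

THRESHOLDS = {
    "cpu_percent": 90,        # alert above 90% CPU
    "disk_used_percent": 85,  # alert above 85% disk usage
    "temperature_c": 70,      # alert above 70 degrees Celsius
}

def alerts(sample: dict) -> list[str]:
    """Return the metrics in this sample that exceed their threshold."""
    return [m for m, limit in THRESHOLDS.items() if sample.get(m, 0) > limit]

sample = {"cpu_percent": 95, "disk_used_percent": 60, "temperature_c": 72}
print(alerts(sample))  # ['cpu_percent', 'temperature_c']
```

Testing monitoring means deliberately driving a metric past its threshold and verifying the alert actually fires.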
This may refer to physical or virtual storage, either of which may be provided from a storage area network (SAN) or dedicated to the system. Sometimes Network Attached Storage (NAS) may also be used. This is basically a stand-alone server with the sole purpose of file storage and may, for example, be used to physically move large amounts of data.
Part of the storage should be set aside for backups, when required. Backups can be taken at storage level, using shared backup services accessible from the network, but also at application level, using specialized software products installed on the system's servers. The infrastructure requirements should specify which parts of the system and which environments need to be backed up, how often, how many backups are retained and for what length of time.
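Such retention requirements translate directly into verifiable test expectations. The sketch below assumes a hypothetical "daily backups, keep the last 14" requirement:

```python
# Sketch: derive testable expectations from a hypothetical retention
# requirement of daily backups with the last 14 retained.
from datetime import date, timedelta

FREQUENCY_DAYS = 1   # hypothetical: one backup per day
RETAIN_COUNT = 14    # hypothetical: keep the last 14

def expected_backup_dates(today: date) -> list[date]:
    """The backup dates that must still be present today."""
    return [today - timedelta(days=i * FREQUENCY_DAYS) for i in range(RETAIN_COUNT)]

def missing_backups(present: set[date], today: date) -> list[date]:
    """Expected backups that are not actually present."""
    return [d for d in expected_backup_dates(today) if d not in present]

today = date(2024, 3, 1)
present = {today - timedelta(days=i) for i in range(14) if i != 5}  # one gap
print(missing_backups(present, today))  # [datetime.date(2024, 2, 25)]
```

A backup test should also cover restores; a backup that cannot be restored fulfils no requirement at all.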
Within InFraMe® this is defined as all generic facilities supporting applications not part of the OS or the maintenance tooling. Examples of these may be messaging, queueing, databases, workflow and application servers. Middleware may also include specialized applications for some of these functions.
Be aware that a project on any application that uses middleware may impact this middleware as well. For example, reports generated by an application may be pulled by a middleware component towards another application. A change in these reports may require a change in the middleware too.
When middleware is maintained as an application, this may lead to confusion regarding the test management approach. It may not be evident what the coverage of a functional or user acceptance test should be for a queueing or messaging tool. In these cases one should be flexible with the test approach but also inquire thoroughly. Sometimes functionality may be hidden in the middleware layer! A common example of this is scripts running on a database; these should often be included in functional testing.
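For a queueing or messaging tool, functional coverage can often be expressed as round-trip tests: publish a message and verify it arrives unchanged and in order. The sketch below uses the Python standard library `Queue` purely as a stand-in for real messaging middleware:

```python
# Sketch of a functional round-trip test for a queueing component, with the
# standard library Queue standing in for real messaging middleware.
import queue

def publish(q: queue.Queue, message: dict) -> None:
    q.put(message)

def consume(q: queue.Queue, timeout: float = 1.0) -> dict:
    return q.get(timeout=timeout)

q = queue.Queue()
publish(q, {"report_id": 1, "target": "finance-app"})
publish(q, {"report_id": 2, "target": "finance-app"})

first = consume(q)
second = consume(q)
print(first, second)  # messages arrive unchanged and in publish order
```

Against real middleware the same round-trip shape applies, extended with cases for message size limits, ordering under load and behaviour when the consumer is down.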
Ideally a technical team includes a specialist on testing of infrastructure. Unfortunately this is often not the case and technical specialists test their own work.
Often the people best suited to this role have a broad knowledge of technical topics related to IT infrastructure and an affinity with testing and/or quality. Infrastructure testers bring in-depth knowledge of how to test different aspects of the infrastructure, such as standard test cases for specific types of deliverables, and of structured methodologies such as Sogeti’s TMap|Infrastructure.
Technical specialists often test their own work based on their knowledge and experience in their area of expertise. Unfortunately, with the current rate of innovation and the increasing complexity of IT landscapes, they may not be able to assess the full impact of their work on the infrastructure as a whole. Therefore it is necessary to facilitate them in documenting these tests using a structured methodology such as TMap|Infrastructure.
The people within the infrastructure maintenance organization fulfil a crucial role in any structured infrastructure testing project. They will need to be aligned at an early stage to help set requirements, provide input during the PRA, support test specification and execution activities and accept the system. This will lead to an easier transition of a new system into the maintenance state and prevent the hidden costs of resolving escaped defects and issues.
Technical project manager
This is the person usually in charge of delivering a technically operational solution. He will generally not only be concerned with delivering the infrastructure but also with installation and configuration (or migration) of the application. His team may include specialists on connectivity, OS, storage or backup, or DBAs. The technical project manager may be helpful in translating a test manager's needs to technical experts unfamiliar with structured testing methodologies.
Product Risk Analysis
As described in TMap|Infrastructure, structured testing of infrastructure is based on risks. Good starting points for document study are usually the infrastructure business requirements or the low-level design. Interviews can be held with specialists; however, bear in mind they might have a singular focus on their own area of expertise. Brown paper sessions can typically be held with architects, maintenance staff, specialists, security officers, suppliers, change/release managers and others.
The product risk analysis can include such product risks as non-standard or multiple OS versions, network connectivity, custom interfaces, compliance, compatibility, security exceptions, availability requirements, data migrations, recovery times and measures, maintenance access, load expectations, etc.
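One common way to classify such risks, as TMap-style approaches do, is chance of failure times impact. The scales, thresholds and example risks below are illustrative assumptions:

```python
# Sketch of classifying product risks by chance-of-failure x impact.
# Scales (1-3), thresholds and example entries are illustrative assumptions.

RISKS = [
    # (risk description, chance 1-3, impact 1-3)
    ("non-standard OS version on application server", 2, 3),
    ("data migration loses user settings",            3, 3),
    ("firewall blocks custom interface",              2, 2),
    ("recovery time exceeds requirement",             1, 3),
]

def risk_class(chance: int, impact: int) -> str:
    """Map a chance x impact score to a risk class."""
    score = chance * impact
    return "high" if score >= 6 else "medium" if score >= 3 else "low"

for risk, chance, impact in RISKS:
    print(f"{risk_class(chance, impact):6} {risk}")
```

The resulting classes then drive how much test effort each risk receives in the strategy.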
Of special interest are those areas where the infrastructure interacts with the application. These usually relate to other quality attributes such as performance, connectivity, security, maintainability or availability.
Technical specialists often use their in-depth knowledge and experience as a basis for testing their deliveries. As a result, documentation of executed tests is often seen as an unnecessary administrative burden. However, infrastructure requirements have often been translated into high- and low-level designs, and possibly amended with change requests. Additionally, some product risks may require input from different technical specialists, and perhaps even application specialists, to cover properly. Therefore it is advisable to have documentation of what the technical specialists have tested and what the results were.
This may need some guidance from a test specialist, since technical specialists may not have experience with this. It is important to keep documentation lean and mean, with as little meta-content as possible. Facilitate the experts by instructing them to specify test cases based on the product risks in a structured way. If that is not feasible, you may use checklists instead to minimize the administrative burden on the technical specialists while still obtaining a record of what has been tested.
Approaching infrastructure testing in a structured way is necessary since infrastructure is continually increasing in complexity. Preferably, a methodology designed specifically for testing infrastructure should be used, such as TMap|Infrastructure.
Product risk analysis
The PRA is central to a test strategy according to TMap|Infrastructure. In practice, brown paper sessions have proven the most effective way to perform one. However, such a session should concern a single product only, to prevent long sessions with large attendance from decreasing participation. As a test manager you may take the role of moderator; however, it is crucial to make sure you are sufficiently informed about the subject matter so you can speak ‘infrastructure language’.
The term infrastructure covers so much ground that it is impossible to find a test expert with in-depth knowledge of it all. Therefore it is crucial to make use of the knowledge available to you in your project. This can come from the technical specialists involved in the project or the staff of the infrastructure operations department. It is vital to teach your resources to think in risks instead of solutions, so they can come up with the criteria that have to be met before going to the operational phase. Usually these resources are extremely scarce and in demand, so communicating your resourcing requirements at an early stage is crucial to your test project's success.
To this end a test manager will also require a high-level understanding of the infrastructure products in scope of the project. You do not need in-depth knowledge of individual products, but you do need to be able to interpret an infrastructure design at a high level and grasp the relationships between different components. Only then can you oversee the entire test scope, align the right resources and identify any gaps there may be in the test strategy.