How To: Conduct the Core Performance-Testing Steps for a Web Application

J.D. Meier, Carlos Farre, Prashant Bansode, Scott Barber

Applies To

Summary

Performance testing involves a set of common core Steps that occur at different stages of projects. Each Step has specific characteristics and tasks to be accomplished. These Steps have been found to be present — or at least to have been part of an active, risk-based decision to omit one of the Steps — in every deliberate and successful performance-testing project that the authors and reviewers have experienced. It is important to understand each Step in detail and then apply the Steps in a way that best fits the project context.

Contents

Objectives

Overview

This How To provides a high-level introduction to the most common core Steps involved in performance-testing your applications and the systems that support those applications. Performance testing is a complex undertaking that cannot effectively be shaped into a “one-size-fits-all” or even a “one-size-fits-most” approach. Projects, environments, business drivers, acceptance criteria, technologies, timelines, legal implications, and available skills and tools simply make any notion of a common, universal approach unrealistic.

There are some Steps that are part of nearly all project-level performance-testing efforts. These Steps may occur at different times, be called different things, have different degrees of focus, and be conducted either implicitly or explicitly, but when all is said and done, it is quite rare for a performance-testing project not to involve at least a decision about each of the seven core Steps identified and referenced throughout this guide. These seven core Steps do not in themselves constitute an approach to performance testing; rather, they represent the foundation upon which an approach can be built that is appropriate for your project.

Overview of Steps

The following sections discuss the seven Steps that most commonly occur across successful performance-testing projects. The key to effectively implementing these Steps is not when you conduct them, what you call them, whether or not they overlap, or the iteration pattern among them, but rather that you understand and carefully consider the concepts, applying them in the manner that is most valuable to your own project context.

Starting with at least a cursory knowledge of the project context, most teams begin identifying the test environment and the performance acceptance criteria more or less in parallel, because all of the remaining Steps are affected by the information gathered in Steps 1 and 2. Generally, you will revisit these Steps periodically as you and your team learn more about the application, its users, its features, and any performance-related risks it might have.

Once you have a good enough understanding of the project context, the test environment, and the performance acceptance criteria, you will begin planning and designing performance tests and configuring the test environment with the tools needed to conduct the kinds of performance tests and collect the kinds of data that you currently anticipate needing, as described in Steps 3 and 4. Once again, in most cases you will revisit these Steps periodically as more information becomes available.

With at least the relevant aspects of Steps 1 through 4 accomplished, most teams will move into an iterative test cycle (Steps 5-7) where designed tests are implemented by using some type of load-generation tool, the implemented tests are executed, and the results of those tests are analyzed and reported in terms of their relation to the components and features available to test at that time.

To the degree that performance testing begins before the system or application to be tested has been completed, there is a naturally iterative cycle that results from testing features and components as they become available and continually gaining more information about the application, its users, its features, and any performance-related risks that present themselves via testing.

Summary Table of Core Performance-Testing Steps

The following summary lists the seven core performance-testing Steps along with the most common input and output for each Step. Note that project context is not listed, although it is a critical input item for each Step.


Step 1. Identify the Test Environment
  Input: Logical and physical production architecture; logical and physical test architecture; available tools
  Output: Comparison of test and production environments; environment-related concerns; determination of whether additional tools are required

Step 2. Identify Performance Acceptance Criteria
  Input: Client expectations; risks to be mitigated; business requirements; contractual obligations
  Output: Performance-testing success criteria; performance goals and requirements; key areas of investigation; key performance indicators; key business indicators

Step 3. Plan and Design Tests
  Input: Available application features and/or components; application usage scenarios; unit tests; performance acceptance criteria
  Output: Conceptual strategy; test execution prerequisites; tools and resources required; application usage models to be simulated; test data required to implement tests; tests ready to be implemented

Step 4. Configure the Test Environment
  Input: Conceptual strategy; available tools; designed tests
  Output: Configured load-generation and resource-monitoring tools; environment ready for performance testing

Step 5. Implement the Test Design
  Input: Conceptual strategy; available tools/environment; available application features and/or components; designed tests
  Output: Validated, executable tests; validated resource monitoring; validated data collection

Step 6. Execute the Test
  Input: Task execution plan; available tools/environment; available application features and/or components; validated, executable tests
  Output: Test execution results

Step 7. Analyze Results, Report, and Retest
  Input: Test execution results; performance acceptance criteria; risks, concerns, and issues
  Output: Results analysis; recommendations; reports

Summary of Steps

The seven core performance-testing Steps can be summarized as follows.

Step 1. Identify the Test Environment

The environment in which your performance tests will be executed, along with the tools and associated hardware necessary to execute the performance tests, constitute the test environment. Under ideal conditions, if the goal is to determine the performance characteristics of the application in production, the test environment is an exact replica of the production environment but with the addition of load-generation and resource-monitoring tools. Exact replicas of production environments are uncommon.

The degree of similarity between the hardware, software, and network configuration of the application under test conditions and under actual production conditions is often a significant consideration when deciding what performance tests to conduct and what size loads to test. It is important to remember that it is not only the physical and software environments that impact performance testing, but also the objectives of the test itself. Often, performance tests are applied against a proposed new hardware infrastructure to validate the supposition that the new hardware will address existing performance concerns.
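One lightweight way to document that degree of similarity is to capture the same basic configuration data on each test and production machine and then compare the snapshots. The following Python sketch is illustrative only and uses standard-library calls; the idea of writing the snapshot to a per-machine JSON file is an assumed convention, not a prescribed practice.

    import json
    import os
    import platform
    import socket

    def environment_snapshot():
        """Collect basic hardware and software characteristics of the current machine."""
        return {
            "hostname": socket.gethostname(),
            "os": platform.platform(),           # OS name, version, and build
            "architecture": platform.machine(),  # e.g., x86_64
            "cpu_count": os.cpu_count(),         # logical processors
            "python_version": platform.python_version(),
        }

    if __name__ == "__main__":
        # Run this on each test and production machine, then compare the files
        # to document environment-related concerns before testing begins.
        snapshot = environment_snapshot()
        with open(socket.gethostname() + "_environment.json", "w") as f:
            json.dump(snapshot, f, indent=2)
        print(json.dumps(snapshot, indent=2))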

The key factor in identifying your test environment is to completely understand the similarities and differences between the test and production environments. Some critical factors to consider are:

Considerations

Consider the following key points when characterizing the test environment:

Step 2. Identify Performance Acceptance Criteria

It generally makes sense to start identifying, or at least estimating, the desired performance characteristics of the application early in the development life cycle. This can be accomplished most simply by noting the performance characteristics that your users and stakeholders equate with good performance. The notes can be quantified at a later time.
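For example, once those notes have been quantified, the criteria can be recorded in a form that is easy to compare against measured results later. The following Python sketch is illustrative only; the criterion names and threshold values are hypothetical and would come from your own stakeholders, business requirements, and contractual obligations.

    # Hypothetical, illustrative criteria. Each entry is (threshold, direction):
    # "max" means the measurement must not exceed the threshold,
    # "min" means the measurement must meet or exceed it.
    ACCEPTANCE_CRITERIA = {
        "search_response_time_p90_seconds": (3.0, "max"),
        "checkout_response_time_p90_seconds": (5.0, "max"),
        "orders_per_hour_throughput": (1000, "min"),
        "web_server_cpu_percent": (75, "max"),
    }

    def evaluate(measured):
        """Compare measured values against each criterion and report pass/fail."""
        for name, (threshold, direction) in ACCEPTANCE_CRITERIA.items():
            value = measured.get(name)
            if value is None:
                print(f"{name}: no measurement collected")
                continue
            ok = value <= threshold if direction == "max" else value >= threshold
            print(f"{name}: {value} ({direction} {threshold}) -> {'PASS' if ok else 'FAIL'}")

    evaluate({"search_response_time_p90_seconds": 2.4, "orders_per_hour_throughput": 1150})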

Classes of characteristics that frequently correlate to a user’s or stakeholder’s satisfaction typically include:

Considerations

Consider the following key points when identifying performance criteria:

Step 3. Plan and Design Tests

Planning and designing performance tests involves identifying key usage scenarios, determining appropriate variability across users, identifying and generating test data, and specifying the metrics to be collected. Ultimately, these items will provide the foundation for workloads and workload profiles.

When designing and planning tests with the intention of characterizing production performance, your goal should be to create real-world simulations in order to provide reliable data that will enable your organization to make informed business decisions. Real-world test designs will significantly increase the relevancy and usefulness of results data.
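One way to capture such a real-world simulation before any scripting begins is to express the usage model as a weighted mix of scenarios with realistic think times. The following Python sketch is purely illustrative; the scenario names, weights, and think-time ranges are hypothetical placeholders for data you would gather from actual usage analysis.

    import random

    # Hypothetical usage model: scenario names, the share of virtual users who
    # perform each scenario, and a realistic range of think time in seconds.
    WORKLOAD_PROFILE = [
        ("browse_catalog",  0.60, (5, 30)),
        ("search_products", 0.25, (3, 15)),
        ("place_order",     0.10, (10, 45)),
        ("manage_account",  0.05, (5, 20)),
    ]

    def choose_scenario():
        """Pick a scenario for a simulated user according to the weighted mix."""
        names = [name for name, weight, _ in WORKLOAD_PROFILE]
        weights = [weight for _, weight, _ in WORKLOAD_PROFILE]
        return random.choices(names, weights=weights, k=1)[0]

    def think_time(scenario_name):
        """Return a randomized think time for the given scenario."""
        for name, _, (low, high) in WORKLOAD_PROFILE:
            if name == scenario_name:
                return random.uniform(low, high)
        raise ValueError(f"Unknown scenario: {scenario_name}")

    # Example: assign scenarios to ten simulated users and sample a think time.
    print([choose_scenario() for _ in range(10)])
    print(round(think_time("place_order"), 1))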

Key usage scenarios for the application typically surface during the process of identifying the desired performance characteristics of the application. If this is not the case for your test project, you will need to explicitly determine the usage scenarios that are the most valuable to script. Consider the following when identifying key usage scenarios:

When identified, captured, and reported correctly, metrics provide information about how your application’s performance compares to your desired performance characteristics. In addition, metrics can help you identify problem areas and bottlenecks within your application.

It is useful to identify the metrics related to the performance acceptance criteria during test design so that the method of collecting those metrics can be integrated into the tests when implementing the test design. When identifying metrics, use either specific desired characteristics or indicators that are directly or indirectly related to those characteristics.

Considerations

Consider the following key points when planning and designing tests:

Realistic test designs include:

Step 4. Configure the Test Environment

Preparing the test environment, tools, and resources before the features and components to be tested become available can significantly increase the amount of testing you can accomplish during the time those features and components are available.

Load-generation and application-monitoring tools are almost never as easy to get up and running as one expects. Whether it is setting up isolated network environments, procuring hardware, coordinating a dedicated bank of IP addresses for IP spoofing, or resolving version incompatibilities between monitoring software and server operating systems, issues always seem to arise from somewhere. Start early to ensure that they are resolved before you begin testing.

Additionally, plan to periodically reconfigure, update, add to, or otherwise enhance your load-generation environment and associated tools throughout the project. Even if the application under test stays the same and the load-generation tool is working properly, it is likely that the metrics you want to collect will change. This frequently implies some degree of change to, or addition of, monitoring tools.
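As a simple illustration of resource monitoring outside of a commercial tool suite, the following Python sketch samples CPU and memory utilization at a fixed interval and writes the samples to a CSV file. It assumes the third-party psutil package is installed; most load-generation products provide their own counter collection, so treat this only as a sketch.

    import csv
    import time

    import psutil  # assumed to be installed: pip install psutil

    def monitor(output_path, interval_seconds=5, samples=12):
        """Sample CPU and memory utilization at a fixed interval and write to CSV."""
        with open(output_path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["timestamp", "cpu_percent", "memory_percent"])
            for _ in range(samples):
                cpu = psutil.cpu_percent(interval=interval_seconds)  # blocks for the interval
                mem = psutil.virtual_memory().percent
                writer.writerow([time.time(), cpu, mem])
                f.flush()  # keep partial data even if the run is interrupted

    if __name__ == "__main__":
        monitor("resource_utilization.csv")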

Considerations

Consider the following key points when configuring the test environment:

Step 5. Implement the Test Design

The details of creating an executable performance test are extremely tool-specific. Regardless of the tool that you are using, creating a performance test typically involves scripting a single usage scenario and then enhancing that scenario and combining it with other scenarios to ultimately represent a complete workload model.
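By way of illustration only (a dedicated load-generation tool is the normal choice), the following Python sketch scripts one simple usage scenario, a single page request, and executes it for a handful of simulated users while recording response times. The URL, user count, iteration count, and think time are hypothetical values.

    import csv
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    TARGET_URL = "http://localhost:8080/"  # hypothetical application under test
    VIRTUAL_USERS = 5
    ITERATIONS_PER_USER = 10

    def single_scenario(user_id):
        """One simulated user repeatedly executing a single usage scenario."""
        samples = []
        for _ in range(ITERATIONS_PER_USER):
            start = time.perf_counter()
            with urllib.request.urlopen(TARGET_URL, timeout=30) as response:
                response.read()  # consume the body so timing reflects the full response
                status = response.status
            elapsed = time.perf_counter() - start
            samples.append((user_id, status, elapsed))
            time.sleep(1.0)  # crude think time; refine to match the usage model
        return samples

    if __name__ == "__main__":
        with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
            results = pool.map(single_scenario, range(VIRTUAL_USERS))
        with open("response_times.csv", "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["user_id", "http_status", "response_seconds"])
            for user_samples in results:
                writer.writerows(user_samples)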

Load-generation tools inevitably lag behind evolving technologies and practices. Tool creators can only build in support for the most prominent technologies and, even then, these have to become prominent before the support can be built. This often means that the biggest challenge in a performance-testing project is implementing your first relatively realistic test, one that simulates users in such a way that the application under test cannot legitimately distinguish the simulated users from real users. Plan for this, and do not be surprised when it takes significantly longer than expected to get it all working smoothly.

Considerations

Consider the following key points when implementing the test design:

Step 6. Execute the Test

Executing tests is what most people envision when they think about performance testing. It makes sense that the process, flow, and technical details of test execution are extremely dependent on your tools, environment, and project context. Even so, there are some fairly universal tasks and considerations that need to be kept in mind when executing tests.

Much of the performance testing–related training available today treats test execution as little more than starting a test and monitoring it to ensure that the test appears to be running as expected. In reality, this Step is significantly more complex than just clicking a button and monitoring machines.

Test execution can be viewed as a combination of the following sub-tasks:
  1. Coordinate test execution and monitoring with the team.
  2. Validate tests, configurations, and the state of the environments and data.
  3. Begin test execution.
  4. While the test is running, monitor and validate scripts, systems, and data.
  5. Upon test completion, quickly review the results for obvious indications that the test was flawed.
  6. Archive the tests, test data, results, and other information necessary to repeat the test later if needed.
  7. Log start and end times, the name of the result data, and so on. This will allow you to match result data to specific test runs after execution is complete. (A minimal sketch of this bookkeeping follows this list.)
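A minimal sketch of sub-tasks 6 and 7, archiving the run artifacts and logging start and end times, might look like the following in Python; the folder layout, file names, and run-naming convention are illustrative assumptions.

    import datetime
    import shutil
    from pathlib import Path

    def run_and_archive(run_name, execute_test, artifact_paths, archive_root="test_runs"):
        """Log start/end times for a test run, then copy its artifacts to a per-run folder."""
        started = datetime.datetime.now()
        run_id = f"{run_name}_{started:%Y%m%d_%H%M%S}"

        execute_test()                       # e.g., kick off the load-generation tool here
        finished = datetime.datetime.now()

        run_dir = Path(archive_root) / run_id
        run_dir.mkdir(parents=True, exist_ok=True)
        for path in artifact_paths:          # scripts, test data, result files, configs
            shutil.copy2(path, run_dir)

        with open(run_dir / "run_log.txt", "w", encoding="utf-8") as log:
            log.write(f"run_id: {run_id}\n")
            log.write(f"started: {started.isoformat()}\n")
            log.write(f"finished: {finished.isoformat()}\n")
            log.write("artifacts: " + ", ".join(artifact_paths) + "\n")
        return run_dir

    # Hypothetical usage: wrap whatever actually starts the test in execute_test.
    # run_and_archive("baseline_load", lambda: None,
    #                 ["response_times.csv", "resource_utilization.csv"])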

As you prepare to begin test execution, it is worth taking the time to double-check the following items:

Considerations

Consider the following key points when executing the test:

Step 7. Analyze Results, Report, and Retest

Managers and stakeholders need more than just the results from various tests; they need conclusions, as well as consolidated data that supports those conclusions. Technical team members also need more than just results; they need analysis, comparisons, and details behind how the results were obtained. Team members of all types benefit when performance results are shared early and often.
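As one illustration of turning raw results into consolidated data that supports conclusions, the following Python sketch reduces response-time samples to simple summary statistics. It assumes a CSV of response times such as the one produced by the earlier test-implementation sketch, and the percentile calculation is a simple nearest-rank estimate rather than a definitive method.

    import csv
    import statistics

    def summarize(csv_path, column="response_seconds"):
        """Consolidate raw response-time samples into simple summary statistics."""
        with open(csv_path, newline="") as f:
            samples = [float(row[column]) for row in csv.DictReader(f)]
        samples.sort()
        p90 = samples[int(0.9 * (len(samples) - 1))]  # simple nearest-rank estimate
        return {
            "samples": len(samples),
            "min": samples[0],
            "median": statistics.median(samples),
            "mean": statistics.fmean(samples),
            "p90": p90,
            "max": samples[-1],
        }

    if __name__ == "__main__":
        for metric, value in summarize("response_times.csv").items():
            print(f"{metric}: {value}")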

Before results can be reported, the data must be analyzed. Consider the following important points when analyzing the data returned by your performance test:

Most reports fall into one of the following two categories:

The key to effective reporting is to present information of interest to the intended audience in a manner that is quick, simple, and intuitive. The following are some underlying principles for achieving effective reports:

Resources