
How To: Conduct Performance Testing

J.D. Meier, Prashant Bansode, Scott Barber, Mark Tomlinson

Applies To


Summary

This How To provides a high-level introduction to a basic approach to performance-testing your applications and the systems that support those applications. Performance testing is typically done to help identify bottlenecks in a system, establish a baseline for future testing, determine compliance with performance goals and requirements, and collect other performance-related data to help stakeholders make informed decisions related to the overall quality of the application being tested. In addition, the results from performance testing and analysis can help you to estimate the hardware configuration required to support the application(s) when you “go live” to production operation.

Contents

Objectives


Overview

At its most basic level, the performance testing process can be viewed in terms of planning and preparation, test execution, data analysis, and results reporting. These activities occur multiple times, more or less in parallel; in some cases they are applied sequentially and repeated until performance testing is deemed complete. In the interest of simplicity, this How To follows such a sequential, iterative approach. The activities below are represented as a simple but effective sequence of steps that is easy to apply to any performance-testing project, from small-scale unit-level performance testing to large-scale production simulation and capacity-planning initiatives. The advantage of this approach is that the same overall concepts and activities apply equally well to expected and exceptional cases alike, rather than prescribing specific actions for every possible circumstance.

Performance testing can be thought of as the process of identifying how an application responds to a specified set of conditions and input. To accomplish this, multiple individual performance test scenarios (such as suites, cases, and scripts) are often needed to cover the most important conditions and/or input of interest. To improve the accuracy of a performance test’s output, the application should, if at all possible, be hosted on a hardware infrastructure that is separate from your production environment, while still providing a close approximation of the actual environment. By examining your application’s behavior (the output) under simulated load conditions (the input), you usually can identify whether your application is trending toward or away from the desired performance characteristics.
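To make this input/output relationship concrete, the sketch below drives a small simulated load against an application and records response times as the observed behavior. It is a minimal illustration only; the URL, user count, and request count are placeholder assumptions, and a real performance test would normally use a dedicated load-generation tool rather than a hand-rolled script.

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    # Placeholders: point these at the application under test.
    TARGET_URL = "http://localhost:8080/"
    CONCURRENT_USERS = 10
    REQUESTS_PER_USER = 5

    def timed_request(url):
        """Issue one request and return its response time in seconds."""
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=30) as response:
            response.read()
        return time.perf_counter() - start

    def user_session(url, count):
        """One simulated user issuing a series of requests."""
        return [timed_request(url) for _ in range(count)]

    if __name__ == "__main__":
        # The simulated load (input) produces response times (output).
        with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
            sessions = pool.map(user_session,
                                [TARGET_URL] * CONCURRENT_USERS,
                                [REQUESTS_PER_USER] * CONCURRENT_USERS)
        times = [t for session in sessions for t in session]
        print(f"{len(times)} requests, average {sum(times) / len(times):.3f}s, "
              f"worst {max(times):.3f}s")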

Some of the most common reasons for conducting performance testing are those summarized above: identifying bottlenecks, establishing baselines, and determining compliance with performance goals and requirements. Performance testing of Web applications is frequently subcategorized into several types of tests; two of the most common are load tests and stress tests. Additionally, performance testing can add value at any point in the development life cycle. For example, performance unit testing frequently occurs very early in the life cycle, while endurance testing is generally saved for very late in the life cycle. (For more information, see Explained: Types of Performance Testing.)

Input

Common input items for performance testing include:

Output

Common output items for performance testing include:

Steps

Step 1. Identify Desired Performance Characteristics

You should start identifying, or at least estimating, the desired performance characteristics early in the application development life cycle. Record the performance characteristics that your users and stakeholders would equate to a successfully performing application, in a manner that is appropriate to your project’s standards and expectations.
Characteristics that frequently correlate to user and stakeholder satisfaction typically include response times, throughput, and resource-utilization levels.
{See How To: Quantify End-User Response Time Goals and How To: Identify Performance Testing Objectives for more information about capturing and recording desired performance characteristics.}
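As an illustration, desired characteristics can be recorded in a structured form that later test runs can be checked against. The sketch below is hypothetical; the response-time figures in particular are invented examples, not recommendations, and the structure should follow your project's own standards.

    # Example goal values only; record whatever your stakeholders agree to.
    # The resource budgets echo the sample table in Step 4 of this How To.
    performance_goals = {
        "response_time_seconds": {
            "log_on": 5.0,           # hypothetical end-user response time goal
            "browse_catalog": 3.0,   # hypothetical end-user response time goal
        },
        "throughput_requests_per_second": 100,
        "resource_budgets": {
            "processor_time_percent_max": 75,
            "available_memory_percent_min": 25,
        },
    }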

Step 2. Identify Test Environment

The degree of similarity between the hardware and network configuration of the application under test conditions and under actual production conditions is often a significant consideration when deciding what performance tests to conduct and what size loads to test. It is important to remember that it is not only the physical environment that impacts performance testing, but also the business or project environment.

In addition to the physical and business environments, you should consider the following when identifying your test environment:

Identify Physical Environment

The key factor in identifying your test environment is to completely understand the similarities and differences between the test and production environments. Critical factors to consider include the hardware, the network configuration, and the software installed in each environment; a consistent record of these facts makes the comparison easier to document, as sketched below.
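One lightweight way to support that comparison is to capture the basic facts of each environment in a consistent, machine-readable record. The sketch below uses only the Python standard library; the fields shown are a starting assumption, so extend the record with whatever factors matter to your comparison.

    import json
    import os
    import platform

    # Basic facts about the machine this script runs on; capture the same
    # record in both the test and production environments and compare them.
    environment_record = {
        "hostname": platform.node(),
        "operating_system": f"{platform.system()} {platform.release()}",
        "machine": platform.machine(),
        "processor_count": os.cpu_count(),
        "python_version": platform.python_version(),  # example software version
    }
    print(json.dumps(environment_record, indent=2))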

Identify the Business Environment

Consider the following test project practices:

Considerations

Step 3. Create Test Scripts

Key user scenarios for the application typically surface during the process of identifying the desired performance characteristics of the application (Step 1). If this is not the case for your test project, you will need to explicitly determine the usage scenarios that are the most valuable to script. {For more information, see How To: Identify Key Scenarios [not yet written].} To create test scripts from the identified or selected scenarios, most performance testers follow an approach similar to the following:

Scenario Step             | Data Inputs                                            | Data Outputs
Log on to the application | Username (unique), Password (matched to username)      | (none)
Browse a product catalog  | Catalog Tree/Structure (static), User Type (weighted)  | Product Description, Sku#, Catalog Page Title, Advertisement Category


Only after you have detailed the individual steps can you effectively and efficiently create a test script to emulate the necessary requests against the application to accomplish the scenario. {For more information, see HowTo:blah.}
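As a concrete illustration, the sketch below emulates the two scenario steps from the preceding table. Every endpoint, field name, and credential in it is an assumption made for the example; a real script would be recorded against, or written to match, the application's actual interface.

    import urllib.parse
    import urllib.request

    BASE_URL = "http://localhost:8080"  # hypothetical application under test

    def log_on(username, password):
        """Scenario step 1: log on with a unique username/password pair."""
        data = urllib.parse.urlencode(
            {"username": username, "password": password}).encode()
        # The /logon endpoint and form field names are assumptions.
        with urllib.request.urlopen(BASE_URL + "/logon", data=data) as resp:
            return resp.status == 200

    def browse_catalog(category):
        """Scenario step 2: request a catalog page and return its body."""
        url = BASE_URL + "/catalog?" + urllib.parse.urlencode(
            {"category": category})
        with urllib.request.urlopen(url) as resp:
            return resp.read()

    if __name__ == "__main__":
        if log_on("user001", "secret001"):   # unique, matched credential pair
            page = browse_catalog("books")   # static catalog structure
            print(f"Received {len(page)} bytes of catalog output")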

Considerations

Step 4. Identify Metrics of Interest

When identified and captured correctly, metrics provide information about how your application’s performance compares to your performance objectives. In addition, metrics can help you identify problem areas and bottlenecks within your application.

Using the desired performance characteristics identified in Step 1, identify the metrics to capture, focusing on measuring performance and locating bottlenecks in the system.
When identifying metrics, you will use either the objectives themselves or indicators that are directly or indirectly related to those objectives. The following table presents examples of metrics corresponding to the performance objectives identified in Step 1.

Metric                 | Accepted level
Request execution time | Must not exceed 8 seconds
Throughput             | 100 or more requests per second
% process time         | Must not exceed 75%
Available memory       | At least 25% of total RAM
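Once accepted levels such as these are defined, each test run can be checked against them mechanically. The following sketch is illustrative only; the captured values are stand-ins for data that your monitoring or load-testing tool would actually supply.

    # Accepted levels from the table above, expressed as pass/fail checks.
    accepted_levels = {
        "request_execution_time_s": lambda v: v <= 8,    # must not exceed 8 s
        "throughput_rps":           lambda v: v >= 100,  # 100+ requests/second
        "process_time_percent":     lambda v: v <= 75,   # must not exceed 75%
        "available_memory_percent": lambda v: v >= 25,   # 25% of total RAM
    }

    captured = {  # hypothetical values from one test run
        "request_execution_time_s": 6.2,
        "throughput_rps": 112,
        "process_time_percent": 81,
        "available_memory_percent": 31,
    }

    for metric, within_limit in accepted_levels.items():
        status = "PASS" if within_limit(captured[metric]) else "FAIL"
        print(f"{metric}: {captured[metric]} -> {status}")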

Considerations

Step 5. Create the Performance Test

The details of creating an executable performance test are extremely tool-specific. Regardless of the tool that you are using, creating a performance test typically involves taking a single instance of your test script (or virtual user) and gradually adding more instances and/or more scripts over time, thereby increasing the load on the component or system.
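The gradual-increase idea can be sketched as follows. The target user count, ramp interval, and hold time are assumptions chosen for the example, and run_user_script is a placeholder for whatever your tool executes on behalf of each virtual user.

    import threading
    import time

    TARGET_USERS = 50       # assumed target load
    RAMP_INTERVAL_S = 2.0   # add one virtual user every 2 seconds
    stop = threading.Event()

    def run_user_script(user_id):
        """Stand-in for a scripted user scenario, looped until the test stops."""
        while not stop.is_set():
            time.sleep(0.1)  # replace with the real scenario requests

    threads = []
    for user_id in range(TARGET_USERS):
        t = threading.Thread(target=run_user_script, args=(user_id,), daemon=True)
        t.start()
        threads.append(t)
        time.sleep(RAMP_INTERVAL_S)  # gradual load increase

    time.sleep(60)  # hold at full load for the measurement period
    stop.set()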

To determine how many instances of a script are necessary to accomplish the objectives of your test, you first need to identify a workload that appropriately represents the usage scenario related to the objective.

Identifying a Workload of Combined User Scenarios

A workload profile consists of an aggregate mix of users performing various operations. Use the following conceptual steps to identify the workload:
For more information about how to create a workload model for your application, see “How To: Model a Workload for a Web Application” at <<Add url>>.
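To illustrate the aggregate-mix idea, the sketch below assigns each simulated user a scenario in proportion to its share of the workload profile. The scenario names and percentages are assumptions made for the example.

    import random

    # Hypothetical workload profile: each scenario's share of total users.
    workload_profile = {
        "browse_catalog": 0.50,  # 50% of users browse
        "search_product": 0.30,  # 30% search
        "place_order":    0.20,  # 20% order
    }

    def next_scenario():
        """Choose a scenario for the next simulated user, weighted by profile."""
        names = list(workload_profile)
        weights = list(workload_profile.values())
        return random.choices(names, weights=weights, k=1)[0]

    # Distribute 1,000 simulated users across the mix and show the result.
    counts = {name: 0 for name in workload_profile}
    for _ in range(1000):
        counts[next_scenario()] += 1
    print(counts)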

Creating the Performance Test

After you have identified the workload for each user scenario to be tested, create your performance test by performing the following conceptual steps:

Step 6. Execute the Test

After the previous steps have been completed to an appropriate degree for the test you want to execute, do the following:

Considerations

Step 7. Analyze the Results, Report, and Retest

Consider the following important points while analyzing the data returned by your performance test:
Note: If required, capture additional metrics in subsequent test cycles. For example, suppose that during the first iteration of load tests the process shows a marked increase in memory consumption, indicating a possible memory leak. In subsequent test iterations, additional memory counters related to generations can be captured, allowing you to study the memory allocation pattern for the application.
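When summarizing response-time data, percentile values are often more informative than averages, because a few very slow requests can hide behind a healthy-looking mean. A minimal sketch, assuming response times collected in seconds:

    def percentile(sorted_values, pct):
        """Return the value at the given percentile (nearest-rank method)."""
        index = max(0, int(round(pct / 100.0 * len(sorted_values))) - 1)
        return sorted_values[index]

    # Illustrative sample data; a real run would produce thousands of samples.
    response_times = sorted([1.2, 0.8, 2.5, 1.1, 0.9, 7.9, 1.4, 1.0, 3.2, 1.3])

    print(f"average: {sum(response_times) / len(response_times):.2f}s")
    print(f"90th percentile: {percentile(response_times, 90):.2f}s")
    print(f"95th percentile: {percentile(response_times, 95):.2f}s")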

Considerations

Use current results to set priorities for the next test.

Resources

<<TBD>>