Performance Testing in Software Testing
In software testing, performance testing is typically a testing practice performed to determine how a system performs in terms of responsiveness and stability under a specific workload. It may also investigate, measure, validate or verify other quality attributes of the system such as scalability, reliability, and resource utilization.
Performance testing is also a computer science practice that seeks to build performance standards into the implementation, design, and architecture of a system.
Types of Performance Testing
1. Load Testing
Load testing is the simplest form of performance testing. A load test is usually performed to understand the behavior of a system under a specific expected load, such as the expected number of concurrent users performing a given number of transactions within a set period. The test shows the response times of all the important, business-critical transactions. Databases and application servers are also monitored during the test; this helps identify bottlenecks in the application software and in the hardware on which it is installed.
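As an illustration only, the sketch below uses Python's standard library plus the third-party requests package (an assumption, not something prescribed by this article) to drive a fixed number of simulated users against a placeholder URL and report basic response-time figures.

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests  # third-party HTTP client, assumed to be installed

TARGET_URL = "https://example.com/"   # placeholder endpoint, not from the article
CONCURRENT_USERS = 20                 # assumed expected concurrent load
REQUESTS_PER_USER = 10

def simulate_user(user_id: int) -> list[float]:
    """Issue a series of requests and record each response time in seconds."""
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        requests.get(TARGET_URL, timeout=30)
        timings.append(time.perf_counter() - start)
        time.sleep(1)  # crude "think time" between transactions
    return timings

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        results = list(pool.map(simulate_user, range(CONCURRENT_USERS)))
    all_timings = [t for user in results for t in user]
    print(f"requests: {len(all_timings)}")
    print(f"average response time: {sum(all_timings) / len(all_timings):.3f}s")
    print(f"worst response time:   {max(all_timings):.3f}s")
```

A dedicated load-testing tool would additionally handle ramp-up, think-time distributions, and per-transaction reporting.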
2. Stress Testing
Stress testing is commonly used to understand the upper limits of capacity within a system. This type of test is performed to determine the robustness of the system to extreme loads and helps application administrators determine whether the system will perform adequately if the current load exceeds the expected maximum.
3. Soak Testing
Soak testing, also known as endurance testing, is usually done to determine whether the system can sustain the expected load continuously. During a soak test, memory usage is monitored to detect potential leaks. Often overlooked is performance degradation: the test should also confirm that throughput and/or response times after a prolonged period of activity are as good as or better than at the start of the test. In essence, a soak test applies a significant load to the system for an extended period in order to discover how it behaves under sustained use.
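A minimal sketch of the monitoring side of a soak test, assuming the system under test runs as a local process and that the third-party psutil library is available; the PID, sampling interval, and duration below are placeholders.

```python
import csv
import time

import psutil  # third-party process/system monitoring library (assumed installed)

TARGET_PID = 12345          # PID of the process under test (placeholder)
SAMPLE_INTERVAL_S = 60      # sample once a minute
DURATION_S = 8 * 60 * 60    # observe the soak run for eight hours

proc = psutil.Process(TARGET_PID)
start = time.time()

with open("soak_memory.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["elapsed_s", "rss_mb"])
    while time.time() - start < DURATION_S:
        rss_mb = proc.memory_info().rss / (1024 * 1024)  # resident set size
        writer.writerow([round(time.time() - start), round(rss_mb, 1)])
        f.flush()
        time.sleep(SAMPLE_INTERVAL_S)
```

A steadily rising memory curve under a flat workload is the classic signature of a leak; comparing early and late response times from the load tool catches gradual degradation.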
4. Spike Testing
Spike testing is performed by suddenly increasing or decreasing the load generated by a large number of users and observing the behavior of the system. The goal is to determine if performance will suffer, the system will fail, or it will be able to handle dramatic changes in load.
5. Breakpoint Testing
Breakpoint testing is similar to stress testing. In this type of testing, additional load is applied over time while the system is monitored for predetermined failure conditions. Breakpoint testing is sometimes called capacity testing because it can be used to determine the maximum capacity below which the system will perform according to its desired specifications or service level agreements. The results of a breakpoint analysis applied to a given environment can be used to determine the best scaling strategy in terms of required hardware, or the conditions that should trigger scale-out events in a cloud environment.
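A hedged sketch of the stepping logic behind a breakpoint test: load is increased in fixed increments until a predefined failure condition (here, an assumed 95th-percentile response-time limit) is breached. The run_load_step helper is hypothetical and stands in for whatever load tool actually generates the traffic.

```python
import statistics

# Hypothetical helper: run the load tool at `users` virtual users for a fixed
# period and return the observed response times in seconds. Its implementation
# depends entirely on the load-testing tool in use.
def run_load_step(users: int) -> list[float]:
    raise NotImplementedError("drive your load-testing tool here")

P95_LIMIT_S = 2.0       # assumed failure condition from the SLA
STEP_USERS = 50         # how much extra load each step adds
MAX_USERS = 2000        # safety ceiling for the experiment

def find_breakpoint() -> int:
    users = STEP_USERS
    while users <= MAX_USERS:
        timings = run_load_step(users)
        p95 = statistics.quantiles(timings, n=20)[-1]  # ~95th percentile
        print(f"{users} users -> p95 {p95:.2f}s")
        if p95 > P95_LIMIT_S:
            return users - STEP_USERS  # last load level that met the SLA
        users += STEP_USERS
    return MAX_USERS
```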
6. Configuration Testing
Instead of testing performance from a load perspective, tests are designed to determine the effects of configuration changes in system components on system performance and behavior. A common example is experimenting with different load-balancing methods.
7. Isolation Testing
Isolation testing is not unique to performance testing but involves repeating the test process that resulted in a problem in the system. Such testing can often isolate and confirm the fault domain.
8. Internet Testing
Internet testing is a relatively new form of performance testing in which global applications such as Facebook, Google, and Wikipedia are tested with load generators placed on the actual target continents, whether physical machines or cloud VMs. These tests usually require a great deal of preparation and monitoring to perform successfully.
Setting Performance Goals
Performance testing can serve a variety of purposes:
- It can demonstrate that the system meets performance criteria.
- It can compare two systems to find the best performance.
- It can measure which system components or workloads cause poor system performance.
Many performance tests are conducted without setting sufficiently realistic, goal-oriented performance targets. The first question from a business perspective should always be, “Why are we doing performance testing?” These considerations are part of the business case for testing. Performance goals will vary depending on the technology and purpose of the system, but should always include some of the following:
Concurrency and throughput in performance testing
If a system identifies end users through some form of login mechanism, then a concurrency goal is highly desirable. By definition, this is the largest number of concurrent system users that the system is expected to support at any given moment. The way a transactional workflow is scripted can affect true concurrency, especially if the iterative part contains login and logout activity.
If the system has no concept of end users, the performance goal is likely to be based on maximum throughput or transaction rate.
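Where both kinds of goal are in play, Little's Law is commonly used to convert between them: concurrency is roughly throughput multiplied by the time each user spends per transaction (response time plus think time). The numbers below are purely illustrative.

```python
# Illustrative numbers only (not from the article).
avg_response_time_s = 2.0     # time the system takes per transaction
avg_think_time_s = 18.0       # time the user spends between transactions
concurrent_users = 500        # stated concurrency goal

# Little's Law: throughput = concurrency / time each user spends per cycle
throughput_tps = concurrent_users / (avg_response_time_s + avg_think_time_s)
print(f"required throughput: {throughput_tps:.0f} transactions/second")  # -> 25
```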
Server response time
It refers to the time taken by one system node to respond to a request from another. A simple example would be an HTTP ‘GET’ request from a browser client to a web server. This is what all load testing tools measure in terms of response time. It may be relevant to set server response time targets between all nodes in the system.
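As a small sketch of the client-side view of server response time, the third-party requests library (assumed available; the URL is a placeholder) exposes the elapsed time between sending a request and receiving the response headers:

```python
import requests

resp = requests.get("https://example.com/", timeout=30)  # placeholder URL
# `elapsed` measures the time from sending the request until the response
# headers are parsed, i.e. roughly the server response time seen on the wire.
print(f"status {resp.status_code}, "
      f"server response time {resp.elapsed.total_seconds():.3f}s")
```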
Render response time
Load-testing tools have difficulty measuring render response time, since they typically have no concept of what happens inside a node other than recognizing a period of time during which there is no activity ‘on the wire’. To measure render response time, it is usually necessary to include functional test scripts as part of the performance test scenario. Many load-testing tools do not offer this feature.
Performance specifications
It is important to define performance specifications in detail and document them in any performance test plan. Ideally, this is done during the requirements development phase of any system development project, prior to any design effort.
However, performance testing is often not performed against a specification; for example, no one may have stated what an acceptable response time is for a given population of users. Performance testing is frequently used as part of the performance profile tuning process. The idea is to identify the “weakest link”: the part of the system that, if made to respond faster, will result in the overall system running faster.
Sometimes it is difficult to identify which part of the system represents the critical path, and some test tools include instrumentation that runs on the server and reports transaction times, database access times, network overhead, and other server metrics, which can be analyzed together with the raw performance statistics. Without such tools, one may have to fall back on the Windows Task Manager on the server to see how much CPU load the performance tests generate.
Performance testing can be done across the web, and even from different parts of the country, since internet response times are known to vary regionally. It can also be done in-house, although routers will then need to be configured to introduce the lag typically experienced on public networks. Loads should be introduced into the system from realistic points. For example, if 50% of the system’s user base will access the system over 56K modem connections and the other half over T1, then the load injectors (computers that simulate real users) should either inject load over the same mix of connections (ideal) or simulate the network latency of such connections while following the same user profile.
It is always helpful to have a statement of the likely peak number of users expected to use the system during peak hours. If the maximum allowable 95th-percentile response time can also be stated, then an injector configuration can be used to test whether the proposed system meets that specification.
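A brief sketch of how such a 95th-percentile check might be applied to collected response times; the data and the 2-second limit are assumptions for illustration.

```python
import statistics

# Response times (seconds) collected during a peak-hour test run; placeholder data.
timings = [0.41, 0.52, 0.47, 0.95, 0.60, 0.44, 1.80, 0.55, 0.49, 0.63,
           0.58, 0.71, 0.46, 0.50, 2.10, 0.62, 0.48, 0.53, 0.66, 0.45]

p95 = statistics.quantiles(timings, n=20)[-1]   # last cut point ~ 95th percentile
MAX_ALLOWED_P95_S = 2.0                         # assumed specification
verdict = "PASS" if p95 <= MAX_ALLOWED_P95_S else "FAIL"
print(f"95th percentile: {p95:.2f}s -> {verdict}")
```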
Prerequisites for Performance Testing
A stable build of the system should match the production environment as closely as possible.
To ensure consistent results, the performance testing environment should be isolated from other environments such as user acceptance testing (UAT) or development. As a best practice, it is always advisable to have a separate performance testing environment that resembles the production environment as closely as possible.
Test conditions
In performance testing, it is often important that the test conditions are similar to the expected actual use. However, this is difficult to arrange in practice and not entirely possible, as production systems are prone to unpredictable workloads. Test workloads can mimic what happens in a production environment as closely as possible, but only in the simplest systems can one exactly mimic the workload variability.
Loosely coupled architectural implementations have created additional complications with performance testing. To truly replicate production-like states, enterprise services or assets that share a common infrastructure or platform require coordinated performance testing, with all users generating production-like transaction volumes and load on a shared infrastructure or platform. Because this activity is so complex and expensive in terms of money and time, some organizations now use production-like conditions in their performance testing environment to understand and verify capacity and resource requirements.
Timing
It is critical to the cost-effectiveness of a new system that performance testing efforts begin early in the development project and extend through deployment. The later a performance defect is discovered, the higher the cost of remediation. This is true of functional testing, but even more so of performance testing, given the end-to-end nature of its scope. It is also critical for the performance testing team to get involved as early as possible, because acquiring and developing the test environment and other key performance prerequisites takes time.
Tools for Performance Testing
Performance testing tooling falls into two main categories:
Performance Scripting
This part of performance testing mainly deals with the creation/scripting of workflows of key identified business processes. This can be done using a variety of tools.
Each of these tools uses either a scripting language or some form of visual representation to create and simulate end-user workflows. Most tools offer a “record and replay” capability, in which the performance tester launches the testing tool, attaches it to a browser, and records all the network transactions that occur between the client and the server. Doing so generates a script that can be extended or modified to simulate various business scenarios.
Performance monitoring
Performance monitoring forms the other facet of performance testing. Through performance monitoring, the behavior and response characteristics of the application under test are observed. The following parameters are usually monitored during a performance test run.
Server hardware parameters
- CPU usage
- Memory usage
- Disk usage
- Network usage
As a first step, the patterns generated from these four parameters provide a good indication of where the bottleneck lies. To determine the exact root cause of an issue, software engineers use tools such as profilers to measure which parts of a device or piece of software contribute most to the poor performance, or to establish throughput levels (and thresholds) for maintaining acceptable response times.
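The four host-level parameters above can be sampled with a tool such as the third-party psutil library; the sketch below (sampling interval and count are arbitrary) is one possible approach, not a prescribed one.

```python
import time

import psutil  # third-party monitoring library (assumed installed)

def sample() -> dict:
    """Take one snapshot of CPU, memory, disk, and network usage."""
    net = psutil.net_io_counters()
    return {
        "cpu_percent": psutil.cpu_percent(interval=1),        # CPU usage
        "memory_percent": psutil.virtual_memory().percent,    # memory usage
        "disk_percent": psutil.disk_usage("/").percent,       # disk usage
        "net_bytes_sent": net.bytes_sent,                     # network usage
        "net_bytes_recv": net.bytes_recv,
    }

if __name__ == "__main__":
    for _ in range(10):            # ten samples, roughly one every five seconds
        print(sample())
        time.sleep(5)
```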
Technology
Performance testing technology employs one or more PCs or Unix servers to act as injectors, each simulating the presence of a number of users and each running a sequence of automated interactions with the host whose performance is being tested. Typically, a separate PC acts as the test conductor, coordinating the injectors and gathering metrics from each of them for reporting purposes. The usual sequence is to ramp up the load: starting with a few virtual users and increasing the number over time to a predetermined maximum.
The test result shows how performance varies with load, expressed as number of users versus response time. Various tools are available to perform such tests; tools in this category typically execute a suite of tests that emulate real users against the system. Sometimes the results reveal oddities: for example, while the average response time may be acceptable, there may be outliers among a few key transactions that take considerably longer to complete, which might be caused by inefficient database queries, large images, and so on.
Performance testing can be combined with stress testing to see what happens when the acceptable load is exceeded. Does the system crash? How long does it take to recover after a large load is decreased? Does its failure cause collateral damage?
Analytical performance modeling is a method of modeling the behavior of a system in a spreadsheet. The model is fed with measurements of transaction resource demands (CPU, disk I/O, LAN, WAN), weighted by transaction mix. The resource demands of the weighted transactions are added to obtain the hourly resource demands and divided by the hourly resource capacity to obtain the resource load.
Using the response time formula R = S / (1 − U) (where R = response time, S = service time, and U = load, i.e. utilization), response times can be calculated and calibrated against the performance test results. Analytical performance modeling allows the evaluation of design options and system sizing based on actual or anticipated business use. It is therefore much faster and cheaper than performance testing, although it requires a thorough understanding of the hardware platforms.
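A worked numeric example of the modeling steps just described, with illustrative transaction mixes and service demands (none of these figures come from the article):

```python
# Illustrative spreadsheet-style model (numbers are assumptions).
transactions_per_hour = {"search": 30_000, "checkout": 3_000}
cpu_seconds_per_txn   = {"search": 0.05,   "checkout": 0.40}   # measured service demand

# Weighted hourly resource demand: 30_000*0.05 + 3_000*0.40 = 2_700 CPU-seconds
hourly_cpu_demand_s = sum(transactions_per_hour[t] * cpu_seconds_per_txn[t]
                          for t in transactions_per_hour)
hourly_cpu_capacity_s = 4 * 3600        # four cores available for one hour

U = hourly_cpu_demand_s / hourly_cpu_capacity_s   # resource load, ~0.19
S = cpu_seconds_per_txn["checkout"]               # service time of one checkout
R = S / (1 - U)                                   # response time formula R = S / (1 - U)
print(f"utilization U = {U:.2f}, predicted checkout response time R = {R:.3f}s")
```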
Tasks to be performed
Tasks for performing such tests will include:
- Decide whether to use internal or external resources for testing, depending on in-house expertise.
- Gather or elicit performance requirements from users or business analysts.
- Develop a high-level plan including requirements, resources, timelines, and milestones.
- Develop a detailed performance test plan (including detailed scenarios and test cases, workloads, environment information, etc.).
- Select the test tool.
- Specify the test data needed and charter the effort (often overlooked, but essential to performing a valid performance test).
- Develop proof-of-concept scripts for each application/component under test, using selected test tools and strategies.
- Develop a detailed performance test project plan, including all dependencies and associated timelines.
- Install and configure the injector/controller.
- Set up the test environment: ideally the same hardware as the production platform, router configuration, a quiet network (so results are not skewed by other traffic), deployment of server instrumentation, creation of database test data sets, etc.
- Dry run the tests – Before actually executing the load test with predefined users, a dry run is done to check the correctness of the script.
- Run the tests – possibly repeatedly to see if any unaccounted factors may affect the results.
- Analyze results – either pass/fail or critical path investigation and recommend corrective action.
Performance Testing Process
Performance Testing Web Applications
According to the Microsoft Developer Network, the performance testing process consists of the following activities:
1. Identify the Test Environment
Identify the physical test environment and the production environment as well as the equipment and resources available to the test team. The physical environment includes hardware, software, and network configuration. Gaining a thorough understanding of the entire test environment early on enables more effective test design and planning and helps you identify testing challenges early in the project. In some situations, this process should be revisited periodically throughout the project’s life cycle.
2. Identify Performance Acceptance Criteria
Identify response time, throughput, and resource utilization goals and constraints. In general, response time is a user concern, throughput is a business concern, and resource utilization is a system concern. Additionally, identify project success criteria that may not be captured by these goals and constraints; for example, using performance tests to evaluate which combination of configuration settings will result in the most desirable performance characteristics.
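One lightweight way to make such criteria actionable is to capture them as data and compare them against measured results automatically; the thresholds and measurements below are illustrative assumptions, not values from this article.

```python
# Illustrative acceptance criteria and measured results.
criteria = {
    "p95_response_time_s": 2.0,   # user-facing response time goal (maximum)
    "throughput_tps":      100,   # business throughput goal (minimum)
    "cpu_utilization_pct": 75,    # system resource-utilization ceiling
}

measured = {
    "p95_response_time_s": 1.6,
    "throughput_tps":      120,
    "cpu_utilization_pct": 68,
}

passed = (
    measured["p95_response_time_s"] <= criteria["p95_response_time_s"]
    and measured["throughput_tps"] >= criteria["throughput_tps"]
    and measured["cpu_utilization_pct"] <= criteria["cpu_utilization_pct"]
)
print("acceptance criteria met" if passed else "acceptance criteria NOT met")
```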
3. Plan and Design Tests
Identify key scenarios, determine variability among representative users and how to simulate that variability, define test data, and establish the metrics to be collected. Consolidate this information into one or more models of system usage to be implemented, executed, and analyzed.
4. Configure the Test Environment
Prepare the test environment, tools, and resources necessary to execute each strategy, as features and components become available for testing. Ensure that the test environment has the resources necessary for monitoring.
5. Implement the Test Design
Develop performance tests according to test design.
6. Execute the Test
Run and monitor your tests. Validate the tests, the test data, and the results collection. Execute validated tests for analysis while monitoring the test and the test environment.
7. Analyze the Results, Tune, and Retest
Analyze, consolidate, and share the results data. Make a tuning change and test again. Compare the results of the two tests. Each improvement tends to return a smaller gain than the previous one. When do you stop? When you reach a CPU bottleneck, the choices are either to optimize the code or to add more CPU capacity.