When a company rolls out applications to the public or to internal customers, the goal is an end-user experience compelling enough that the user returns. Part of that experience is getting through tasks quickly, without error, and at any time the user chooses. Providing this requires resources with real costs attached: hardware, software development, and support. These costs can be significant.
Performance testing is the quality assurance discipline that brings these needs together and provides recommendations for reaching the most cost-effective solution. For the consultant's client, identifying where performance goals can be met with the best ROI is key to maintaining customer satisfaction.
Providing a client with the correct performance testing methodology, covering scenario modeling, test tools, and analysis, is the main goal of an engagement. Defining performance scenarios is the critical first step in identifying performance bottlenecks. This is accomplished through discussions with the business and IT to identify critical business flows and confirm which parts of the application are exercised the most.
Tools like UCML diagrams can provide a visual representation of the business flows. Data pulled from web and application server logs, or from analytics tools such as Google Analytics, establishes a factual starting point for how the application is actually used. Combined with the expected user load on the system, this yields a representation of system usage that can be presented to the client for decisions on pass/fail exit criteria.
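As a simple illustration of this step, the Python sketch below derives a request mix from a standard combined-format web access log. The log path and the top-ten cutoff are illustrative assumptions, not part of any particular toolchain:

```python
# A minimal sketch: derive a request mix from a combined-format access log.
# The file name "access.log" and the top-ten cutoff are illustrative.
from collections import Counter

def request_mix(log_path: str, top_n: int = 10) -> list[tuple[str, float]]:
    """Return the top_n request paths and their share of total traffic."""
    hits = Counter()
    with open(log_path) as log:
        for line in log:
            parts = line.split('"')
            if len(parts) < 2:
                continue  # skip malformed lines
            request = parts[1].split()  # e.g. 'GET /checkout HTTP/1.1'
            if len(request) >= 2:
                hits[request[1]] += 1
    total = sum(hits.values()) or 1
    return [(path, count / total) for path, count in hits.most_common(top_n)]

for path, share in request_mix("access.log"):
    print(f"{path:40s} {share:6.1%}")
```

The resulting percentages map directly onto the weights of the workload model that the client signs off on.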
The next step is translating this application-usage information into repeatable performance scenarios. Besides creating robust performance scripts, another critical step in building repeatable tests is architecting a test environment that provides a uniform starting point for every test cycle.
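To make the idea of a repeatable scenario concrete, here is a minimal sketch using Locust, an open-source Python load-testing tool. The endpoints and task weights are hypothetical; in practice they would mirror the usage mix derived from the log analysis above:

```python
# A sketch of a repeatable performance scenario with Locust (https://locust.io).
# Endpoints and task weights are illustrative, not from any real application.
from locust import HttpUser, task, between

class StorefrontUser(HttpUser):
    wait_time = between(1, 5)  # think time between actions, in seconds

    @task(6)                   # weight 6: browsing dominates the mix
    def browse_catalog(self):
        self.client.get("/catalog")

    @task(3)                   # weight 3: searches
    def search(self):
        self.client.get("/search", params={"q": "widget"})

    @task(1)                   # weight 1: checkout is rare but critical
    def checkout(self):
        self.client.post("/checkout", json={"cart_id": "demo"})
```

Running `locust -f locustfile.py --host https://test.example.com` (the host is a placeholder) then drives this mix against the test environment at whatever user count the load model calls for.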
For example, testing against a database that holds 1,000 rows of data versus a production-size database with millions of rows will yield very different results. One solution is database virtualization: with a tool such as Delphix Data Virtualization, you can create database clones so performance tests run against a production-size database that is restored to an initial state before each cycle, returning consistent results.
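Delphix exposes its own interfaces for this rewind operation; as a tool-neutral sketch of the same idea, the snippet below resets a PostgreSQL test database from a baseline snapshot before each cycle. The database name and dump path are illustrative:

```python
# The "rewind to a known state" idea, sketched with plain PostgreSQL tooling.
# Database name and dump file are illustrative placeholders.
import subprocess

DB_NAME = "perf_test"
BASELINE_DUMP = "baseline.dump"  # production-size snapshot, captured once

def reset_database() -> None:
    """Drop the test database and restore it from the baseline snapshot."""
    subprocess.run(["dropdb", "--if-exists", DB_NAME], check=True)
    subprocess.run(["createdb", DB_NAME], check=True)
    subprocess.run(["pg_restore", "--dbname", DB_NAME, BASELINE_DUMP],
                   check=True)

# Call reset_database() before each test cycle so every run starts
# from the same production-size data set.
```

At production scale, restoring a full dump this way can take hours, which is precisely the problem copy-on-write database virtualization solves: a virtual clone can be rewound in minutes regardless of data volume.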
Continuing with the idea of faithfully duplicating a production environment for testing, there is service virtualization. In the modern application environment, many third-party application programming interfaces (APIs) are called to retrieve data for business flows. Unfortunately, most of these third-party APIs do not offer test environments, or they charge the customer more when a test environment is pointed at the providers' production APIs.
HPE Service Virtualization is a solution that mimics API responses under varying conditions and allows the code to be fully exercised. When installed in a test environment, monitoring tools such as the AppDynamics Application Intelligence Platform can provide real-time insight into application bottlenecks, from inefficient code to resource limits on the servers in the application's architecture.
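To show what service virtualization does at its simplest, the following sketch stands in for a third-party pricing API with a canned response and simulated latency. Commercial tools like HPE Service Virtualization add traffic recording and far richer response modeling; the port, latency, and payload here are purely illustrative:

```python
# A minimal stand-in for a third-party API: canned JSON responses with
# configurable latency. Payload, port, and latency are illustrative.
import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

SIMULATED_LATENCY_SECONDS = 0.250  # model the third party's typical response time

class VirtualServiceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(SIMULATED_LATENCY_SECONDS)  # simulate network/service delay
        body = json.dumps({"quote": 19.99, "currency": "USD"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Point the application under test at http://localhost:8080 instead
    # of the real third-party endpoint.
    HTTPServer(("", 8080), VirtualServiceHandler).serve_forever()
```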
Once all the metrics from a test are gathered, analysis can begin on how best to address performance issues and recommend the most cost-effective way to correct the shortfalls. You may need to weigh one-time costs (such as additional development hours or purchasing more hardware) against recurring costs (such as deploying more virtual hosts in the cloud, which can be less expensive than re-architecting the software or starting over with a brand-new solution).
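A back-of-the-envelope comparison can frame that trade-off. The figures below are illustrative placeholders, not benchmarks:

```python
# Hypothetical figures comparing a one-time fix with a recurring cost.
REARCHITECT_COST = 120_000     # one-time: development effort to fix the code
EXTRA_HOSTS_PER_MONTH = 2_500  # recurring: added cloud capacity

# Months until the recurring spend overtakes the one-time investment
break_even_months = REARCHITECT_COST / EXTRA_HOSTS_PER_MONTH
print(f"Scaling out is cheaper for the first {break_even_months:.0f} months")
```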
The value of performance testing is that it provides a look into an application's future. Identifying problems now, through sound load modeling, tooling, and analysis, helps businesses achieve the goals they have laid out for their success.