Josef Mayrhofer

Metrics for Performance Engineers

When designing and executing performance tests, we focus mainly on how real users will use the application in production. This end-game thinking is paramount for a successful performance test and involves the entire metric family: technical, business, and operational metrics. But how do they fit together, and why are these metrics so crucial for a successful performance engineer?


Metrics for building performance requirements

When crafting meaningful performance requirements, we talk to business teams to clarify usage volumes, growth patterns, response time, and throughput requirements. An excellent performance requirement includes business, technical, and operational metrics. If you leave one of them out, you might build the wrong system or end up troubleshooting performance problems in production.
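
As a rough illustration of what such a requirement can look like, the sketch below captures all three metric families in one place. The metric names and target values are assumptions for illustration, not figures from a real project.

```python
# Hypothetical performance requirement covering all three metric families.
# All names and target values are illustrative assumptions.
performance_requirements = {
    "business": {
        "concurrent_users_peak": 500,      # expected peak concurrency
        "orders_per_hour_peak": 12_000,    # peak business throughput
        "yearly_growth_rate": 0.15,        # 15 % volume growth per year
    },
    "technical": {
        "response_time_p95_ms": 800,       # 95th percentile response time
        "error_rate_max": 0.01,            # at most 1 % failed requests
        "cpu_utilization_max": 0.75,       # keep headroom on application servers
    },
    "operational": {
        "startup_time_max_s": 120,         # application must start within 2 minutes
        "alert_on_overload": True,         # monitoring must raise an alert under overload
    },
}
```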


Metrics for designing performance tests

One of the first questions every performance engineer should ask is: "What are your performance requirements?" If they are absent, you should go back to the drawing board and make sure they are in place before starting your performance test design. As a performance expert, you then consider usage volumes, the user and data mix, the relevant use cases, and the test environment.
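
One way to express such a design is a workload model in a load testing tool. The sketch below uses Locust purely as an example; the post does not prescribe a tool, and the endpoints, task weights, and think times are assumptions.

```python
from locust import HttpUser, task, between

# Hypothetical workload model: 80 % catalog browsing, 20 % checkout,
# with 1-5 seconds of think time between requests.
class ShopUser(HttpUser):
    wait_time = between(1, 5)  # models user think time

    @task(8)
    def browse_catalog(self):
        self.client.get("/catalog")          # assumed endpoint

    @task(2)
    def checkout(self):
        self.client.post("/checkout", json={"item_id": 42})  # assumed endpoint and payload
```

The task weights encode the use-case mix, while the user count and ramp-up you choose when starting the run encode the usage volume.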


Metrics for executing and reporting performance tests

After you have executed your performance test, you must analyze the results, identify bottlenecks, and set the test status to passed or failed. Deriving the test status is not guesswork; you need evidence of why a test passed or failed. Comparing the required against the actual results removes all doubt. Ideally, you list the performance requirements in your test report, state for each one whether it was fulfilled or failed, and hand the failed ones to your developers so they can implement improvements.
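
A minimal sketch of such a required-versus-actual comparison could look like this; the metric names and measured values are made up for illustration.

```python
# Minimal sketch: derive a pass/fail test status by comparing required versus actual results.
requirements = {"response_time_p95_ms": 800, "error_rate_max": 0.01}   # required
actuals      = {"response_time_p95_ms": 910, "error_rate_max": 0.004}  # measured

def evaluate(requirements, actuals):
    """Return per-metric targets, actual values, and pass/fail flags."""
    report = {}
    for metric, target in requirements.items():
        actual = actuals[metric]
        report[metric] = {"target": target, "actual": actual, "passed": actual <= target}
    return report

report = evaluate(requirements, actuals)
for metric, result in report.items():
    status = "PASSED" if result["passed"] else "FAILED"
    print(f"{metric}: target={result['target']} actual={result['actual']} -> {status}")

print("Overall test status:", "PASSED" if all(r["passed"] for r in report.values()) else "FAILED")
```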


Connecting the dots

For a successful performance test, we simulate the expected usage volume (business metric), measure response times (technical metric), and validate that problem alerting during overload situations (operational metric) works as intended.


Technical metrics describe how a system should behave in certain situations; a small calculation sketch follows the list below. Some examples of technical metrics are:

  • Response times

  • Resource utilization

  • Throughput rate

  • Error rate

  • Processing time of batch processes

  • Software metrics such as code complexity
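
As a minimal sketch of how some of these metrics are derived from raw measurements, the example below computes throughput, error rate, and the 95th percentile response time; the sample data is made up.

```python
import statistics

# Sketch: derive common technical metrics from raw per-request samples (illustrative data).
response_times_ms = [120, 250, 180, 900, 310, 140, 95, 400, 220, 175]
errors = 1              # failed requests observed during the run
test_duration_s = 60    # length of the measurement interval

throughput_rps = len(response_times_ms) / test_duration_s
error_rate = errors / len(response_times_ms)
p95_ms = statistics.quantiles(response_times_ms, n=100)[94]  # 95th percentile

print(f"Throughput:  {throughput_rps:.2f} requests/s")
print(f"Error rate:  {error_rate:.1%}")
print(f"p95 latency: {p95_ms:.0f} ms")
```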



Business metrics describe what a system should support; a short sizing sketch follows the list below. Some examples of business metrics are:

  • Number of concurrent users

  • Number of users authorized to access

  • Number of transactions under average and peak periods

  • Service level agreement (SLA) breach or compliance

  • Efficiency of business processes
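
Concurrent users and transaction volumes are linked through Little's Law (concurrent users ≈ throughput × (response time + think time)). The sketch below shows how an assumed peak transaction volume translates into a concurrency target; all numbers are illustrative.

```python
# Sketch: Little's Law links business volume to a concurrency target (illustrative numbers).
peak_transactions_per_hour = 12_000
avg_response_time_s = 0.8
avg_think_time_s = 10.0

throughput_tps = peak_transactions_per_hour / 3600
concurrent_users = throughput_tps * (avg_response_time_s + avg_think_time_s)

print(f"Peak throughput:      {throughput_tps:.2f} transactions/s")
print(f"Concurrent users (~): {concurrent_users:.0f}")
```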



Operational metrics do not directly affect the end-user experience, but to simulate production conditions holistically, we include them in our performance test experiments; a short measurement sketch follows the list below. Some examples of operational metrics are:

  • Time to start up the application

  • Time to stop the application

  • Backup times

  • Duration of a data restore

  • Problem detection

  • Alerting behavior
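
As an example of how one of these metrics can be captured, the sketch below measures application startup time by polling a health endpoint until it responds. The URL, timeout, and poll interval are assumptions; in practice, you would start the timer the moment your deployment script launches the application.

```python
import time
import urllib.request

# Sketch: measure application startup time by polling a health endpoint (assumed URL).
HEALTH_URL = "http://localhost:8080/health"
TIMEOUT_S = 120

def measure_startup_time():
    start = time.monotonic()
    while time.monotonic() - start < TIMEOUT_S:
        try:
            with urllib.request.urlopen(HEALTH_URL, timeout=2) as response:
                if response.status == 200:
                    return time.monotonic() - start
        except OSError:
            time.sleep(1)  # application not reachable yet, keep polling
    raise TimeoutError(f"Application did not start within {TIMEOUT_S} seconds")

if __name__ == "__main__":
    print(f"Startup time: {measure_startup_time():.1f} s")
```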


As a performance engineer, keep the entire metric family in mind to make your load and performance tests successful. I hope you liked this blog post, and I look forward to your questions or comments.


Keep up the great work! Happy Performance Engineering!



