
Modernize Performance Testing in 7 Steps

We achieve nothing without solid requirements in place, yet for some reason, we tend to ignore this fact for the more technical, nonfunctional aspects of how and why we build our IT services.

Build strong and meaningful key performance indicators.

In one of my recent projects, nobody had any idea of the expected number of concurrent users or service requests per second. Starting a performance validation without a clue about the objectives is a bad idea: what would you measure, and who would decide whether a test passed or failed? Don't fall into this trap. Your first question in any agile or waterfall-based performance test should be: what are the performance requirements? If no detailed information is available, your first task should be to create meaningful performance requirements and KPIs.
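To make this concrete, here is a minimal sketch of what machine-checkable KPIs could look like; every metric name and threshold below is a hypothetical example, not a value from a real project.

```python
# Hypothetical performance requirements and limits, expressed so that a
# script (not a meeting) decides whether a test run passed or failed.
REQUIREMENTS = {
    "concurrent_users": 500,      # the load the system must sustain
    "requests_per_second": 150,   # expected aggregate throughput
}
LIMITS = {
    "login_p95_ms": 800,          # 95th-percentile login response time
    "search_p95_ms": 1200,        # 95th-percentile search response time
    "error_rate": 0.01,           # at most 1% failed requests
}

def evaluate(measured: dict) -> list[str]:
    """Return a list of KPI violations; an empty list means pass."""
    violations = []
    for name, minimum in REQUIREMENTS.items():
        if measured.get(name, 0) < minimum:
            violations.append(f"{name}: {measured.get(name)} < target {minimum}")
    for name, maximum in LIMITS.items():
        if measured.get(name, float("inf")) > maximum:
            violations.append(f"{name}: {measured.get(name)} > limit {maximum}")
    return violations

# Example: figures collected from a load-test run
print(evaluate({"concurrent_users": 500, "requests_per_second": 162,
                "login_p95_ms": 640, "search_p95_ms": 1350, "error_rate": 0.004}))
```

The point is not the code itself but the practice: once KPIs live in a repository as data, every test run can be judged automatically and consistently.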


Design and architecture validation.

There are not enough engineers who can identify all the dependencies of our complex IT services. Automation is your chance for near-endless scalability and seems to be the solution for many challenges in which humans become the bottleneck. We all agree that testing must be automated to achieve the required coverage and reduce time to market, yet for some reason, design and architecture validation is still a whiteboard task. Open your eyes and explore what modern tools have to offer. Cloud-based services, virtualization, API-first strategies, and other developments drive ever-increasing complexity. Our only chance to find performance hotspots early is automated architecture validation: we use transaction tracing, let APM tools show the involved service flows, and compare those flows with the intended design.
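As a sketch of the idea, assuming your APM tool can export the observed service-to-service calls from transaction traces (all service names and the export format below are made up), the comparison against the intended design can be as simple as a set difference:

```python
# Automated architecture validation: compare the service flows observed
# in transaction traces with the intended design.
INTENDED_FLOWS = {
    ("web-frontend", "order-service"),
    ("order-service", "payment-service"),
    ("order-service", "inventory-service"),
}

def validate_architecture(observed_flows: set) -> bool:
    """Report deviations between traced reality and the whiteboard design."""
    unexpected = observed_flows - INTENDED_FLOWS
    missing = INTENDED_FLOWS - observed_flows
    for caller, callee in sorted(unexpected):
        print(f"UNEXPECTED dependency: {caller} -> {callee}")
    for caller, callee in sorted(missing):
        print(f"MISSING intended flow: {caller} -> {callee}")
    return not unexpected and not missing

# Example: flows reconstructed from traces; the call to legacy-db is a
# hotspot that never appeared on the whiteboard.
validate_architecture({
    ("web-frontend", "order-service"),
    ("order-service", "payment-service"),
    ("order-service", "legacy-db"),
})
```

Run a check like this after every deployment and architectural drift surfaces automatically instead of during the next outage.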


Bring performance validation closer to coding.

Unit testing is king and essential for a high-quality product because the earlier we identify problems, the easier they are to fix: developers still have the corresponding code in their minds, and repairing such well-known code is much simpler. Test-driven development (TDD) also helps developers think more from a tester's perspective, which leads to excellent unit test coverage and built-in quality from the beginning. However, TDD usually puts the entire focus on functional validation. Cucumber, for instance, is a widely used framework in this space, and the corresponding Cucumber performance framework lets you run a scaled-down load test to check the essential performance of new services. We need more such generic solutions that let us write test cases first, only once, and reuse them for functional, performance, and security tests.
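For illustration, here is a minimal scaled-down performance check written as a plain pytest test rather than with the Cucumber performance framework; create_order() and the 50 ms budget are hypothetical stand-ins for your own service call and requirement.

```python
# A unit-level latency check that runs alongside the functional tests,
# so performance regressions surface while the code is still fresh.
import statistics
import time

def create_order():
    time.sleep(0.005)   # stand-in for the real service call under test

def test_create_order_meets_latency_budget():
    samples_ms = []
    for _ in range(50):             # small, developer-machine-sized load
        start = time.perf_counter()
        create_order()
        samples_ms.append((time.perf_counter() - start) * 1000)
    p95 = statistics.quantiles(samples_ms, n=20)[18]   # ~95th percentile
    assert p95 < 50, f"p95 latency {p95:.1f} ms exceeds the 50 ms budget"
```

A check this small won't replace a full load test, but it catches the worst regressions at the cheapest possible point: before the code leaves the developer's machine.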


Continuous testing for performance.

Automating as much as you can is never a bad idea; once your business grows, every manual step can become a bottleneck. Performance testing is nothing you should keep for the last minute. It's a task that should run automatically and, ideally, when nobody else is using the test environment. Share the performance figures with all your teams and build a robust performance repository.
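A minimal sketch of such an automated step, assuming a Locust-based load test: the script runs the test headless and appends the aggregated figures to a simple JSON-lines "performance repository". The locust command-line flags are standard; file names, host, and user counts are hypothetical, and the CSV column names follow Locust's export format (adjust if your version differs).

```python
# One nightly CI step: run the load test unattended, then record the key
# figures so every team can see the performance trend over builds.
import csv
import json
import subprocess
import time

def run_nightly_load_test():
    subprocess.run(
        ["locust", "-f", "locustfile.py", "--headless",
         "-u", "100", "-r", "10", "--run-time", "10m",
         "--host", "https://staging.example.com", "--csv", "nightly"],
        check=True,   # a failed run fails the pipeline
    )
    # Keep the aggregated row so figures stay comparable across builds.
    with open("nightly_stats.csv") as f:
        aggregated = next(row for row in csv.DictReader(f)
                          if row["Name"] == "Aggregated")
    record = {
        "timestamp": time.time(),
        "requests_per_sec": aggregated["Requests/s"],
        "p95_ms": aggregated["95%"],
    }
    with open("performance_repository.jsonl", "a") as repo:
        repo.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    run_nightly_load_test()
```

Scheduling this for a quiet window, such as a nightly run on an otherwise idle test environment, keeps the figures free of noise from other activity.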


Understand your limitations.

When application usage grows, response times can suffer, and your once-thrilled customers will become frustrated and stop using your services. This happens daily, and you should not make the same mistake as many other companies. Guessing limitations is a bad idea because you can't easily grasp the dynamics of your complex business services. A systematic approach involving performance testing, workload modeling, and performance engineering helps you understand breaking points and hotspots much better. You will learn whether tuning is required to support the intended number of users and service requests. Sometimes, extreme market conditions or events can change how your IT services are used. Whatever severe conditions occur, you will sleep much better if you know your user-volume limits.
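One systematic way to find the breaking point is a step-load test. The sketch below uses Locust's LoadTestShape hook; the endpoint, step size, and limits are hypothetical. Users are added in fixed steps, and the step at which response times or error rates degrade marks the breaking point.

```python
# A step-load "breaking point" test as a Locust locustfile.
from locust import HttpUser, LoadTestShape, task, between

class ShopUser(HttpUser):
    wait_time = between(1, 3)

    @task
    def browse_catalog(self):
        self.client.get("/catalog")

class StepLoadShape(LoadTestShape):
    """Add 50 users every 2 minutes up to 1,000, then stop."""
    step_users = 50
    step_time = 120   # seconds per step
    max_users = 1000

    def tick(self):
        run_time = self.get_run_time()
        users = (int(run_time // self.step_time) + 1) * self.step_users
        if users > self.max_users:
            return None                  # all steps done: stop the test
        return (users, self.step_users)  # (target users, spawn rate)
```

Plot the measured response times against the user count per step, and the knee of the curve tells you exactly how much headroom you have before customers notice.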


Modern applications can scale on demand.

When business demands grow, many applications and websites suffer. Auto-scaling is a rising star, and we all hope this approach will prevent us from running into slowdowns. It is a powerful feature, but it also requires careful configuration and quality assurance. Manual testing won't help you validate such auto-scaling scenarios. Performance testing is your only chance to verify that the configured scaling patterns work as intended. In many cases, tuning is required before operational teams can fully rely on auto-scaling features.
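As an illustration, again assuming a Locust setup, a staged load shape can drive the scale-out and scale-in phases you want to validate. The stage durations, user counts, and endpoint below are hypothetical and should mirror your actual scaling policy (for example, scale out above 70% CPU).

```python
# A staged load pattern for validating auto-scaling, as a Locust locustfile.
from locust import HttpUser, LoadTestShape, task, between

class ApiUser(HttpUser):
    wait_time = between(1, 2)

    @task
    def get_status(self):
        self.client.get("/api/status")

class ScalingValidationShape(LoadTestShape):
    """Ramp up to force a scale-out, hold to confirm latency stays flat
    on the new capacity, then drop back to observe scale-in."""
    stages = [
        (300, 50),     # 0-5 min: baseline, no scaling expected
        (900, 400),    # 5-15 min: heavy load, should trigger scale-out
        (1500, 400),   # 15-25 min: hold and compare latency to baseline
        (1800, 50),    # 25-30 min: back to baseline, watch scale-in
    ]

    def tick(self):
        run_time = self.get_run_time()
        for end_time, users in self.stages:
            if run_time < end_time:
                return (users, 20)    # spawn/stop rate: 20 users per second
        return None                   # end of test
```

During the run, watch both sides: the load-test figures show whether latency stays flat, and the platform's scaling events show whether new instances actually arrived in time.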


Improve your services every day.

Reliable applications are a journey, not a destination. Everything changes in our complex application landscapes, and we all must stay open-minded about continuous simplification and improvement. Shortcuts may help push new code to production, but your application will fail sooner or later if you don't improve it continuously. Workaround after workaround is never a good idea. Building teams that are motivated to find and fix hotspots on all layers, every day, is the way to go.


Happy performance engineering!

