Josef Mayrhofer

Dreams and reality in performance testing

Updated: May 20, 2022

During my career, I’ve seen many slow-loading and unreliable applications. All these projects had one thing in common: a disappointing user experience due to poor reliability or other issues. I’m sure you’ve encountered these problems as well. If, like many of us, you haven’t got round to identifying the actual causes, you end up in firefighting mode. This exercise is dangerous, time-consuming, and always frustrating.


Then we wonder why we spend so much time jumping from one problem to another.

But why is it that our wonderful applications don’t run fast enough when real users access them?


A few weeks ago, I read the following lines in a book written by a friend:

“If you emphasize quality at the end of a project, you emphasize system testing. However, testing cannot fix design-time mistakes. If you emphasize quality in the middle of the project, you emphasize construction practices. If you emphasize quality at the beginning of the project, you plan for, require, and design a high-quality product.”

I can also describe the problem like this:

If you start the development process with designs for a low-quality car, you can test all you like, but it will never be a sports car.

How can we fix this problem?

One thing is sure: you can’t fix performance problems by testing in late development stages. I’ll tell you why. Based on the metrics I’ve collected from hundreds of performance-testing projects, I’ve found that more than 60 percent of performance problems are related to application design and implementation. It’s good that performance engineers identify breaking points during system testing, but there’s a flip side that often prevents developers from implementing a successful fix: we cannot change the design of our applications at the system-testing stage. Not even Agile development methods help us avoid these issues. Even Scrum teams tend to implement a workaround instead of stepping back, fixing the design issue, and then continuing once performance has reached the required level.
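
To make this concrete, here is a minimal sketch in Python of the kind of design flaw that late-stage testing can reveal but not repair: the classic N+1 query pattern. The table and column names are hypothetical; the point is that the fix is a design change, not a tuning tweak.

import sqlite3

# In-memory stand-in for a production database, with hypothetical tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT);
    CREATE TABLE items  (order_id INTEGER, sku TEXT);
    INSERT INTO orders VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO items  VALUES (1, 'A-1'), (1, 'A-2'), (2, 'B-1');
""")

def items_per_order_slow():
    # Design flaw: one query per order ("N+1"). Invisible with two rows,
    # but it turns into thousands of round trips at production volumes.
    result = {}
    for (order_id,) in conn.execute("SELECT id FROM orders"):
        rows = conn.execute(
            "SELECT sku FROM items WHERE order_id = ?", (order_id,)
        ).fetchall()
        result[order_id] = [sku for (sku,) in rows]
    return result

def items_per_order_fast():
    # Design fix: fetch everything in one query and group in memory.
    result = {}
    for order_id, sku in conn.execute(
        "SELECT order_id, sku FROM items ORDER BY order_id"
    ):
        result.setdefault(order_id, []).append(sku)
    return result

print(items_per_order_slow())  # {1: ['A-1', 'A-2'], 2: ['B-1']}
print(items_per_order_fast())  # same result, one round trip instead of N+1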


Based on my experience, performance has to be built into your application from the very start. Although this idea is by no means new, it generally takes a lot of commitment as well as a few bad performance experiences before our decision-makers finally decide to make performance an intrinsic part of everyone’s daily work.


And performance engineering certainly shouldn’t stop after deployment to production. Your data volumes, content, and user activities will change over time, and we can’t anticipate those changes accurately, so performance review and optimization have to continue after go-live. A practical way to support this is to build performance dashboards during the testing stages and hand them over to your operations teams post-deployment.


You can visualize all the relevant performance metrics and train your teams to recognize both normal and critical levels. As these dashboards are great information radiators, projects benefit enormously from making them accessible to a wider audience.
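
As an illustration, here is a minimal sketch of what such instrumentation can look like. It assumes the Python prometheus_client package; the metric name, the simulated endpoint, and the port are hypothetical, and a dashboard tool such as Grafana would chart the scraped data.

import random
import time

from prometheus_client import Histogram, start_http_server

# Histogram of request durations; a dashboard can derive percentiles
# and alert thresholds from it. The metric name is hypothetical.
REQUEST_LATENCY = Histogram(
    "checkout_request_latency_seconds",
    "Latency of the checkout endpoint in seconds",
)

@REQUEST_LATENCY.time()  # records each call's duration automatically
def handle_checkout():
    time.sleep(random.uniform(0.05, 0.2))  # stand-in for real work

if __name__ == "__main__":
    start_http_server(8000)  # serves /metrics for the monitoring system
    while True:
        handle_checkout()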

In short, I recommend integrating the steps below into your SDLC:

  1. Design for performance

  2. Code for performance

  3. Conduct continuous performance testing (see the sketch after this list)

  4. Integrate performance testing

  5. Use E2E scenario performance testing

  6. Monitor for performance

  7. Conduct continuous performance audits and optimization
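
To give step 3 some shape, here is a minimal load-test sketch using Locust; the host, endpoints, and payload are hypothetical. Run against every build, a small scripted user journey like this catches regressions long before system testing.

from locust import HttpUser, task, between

# Run with: locust -f loadtest.py --host https://staging.example.test
class ShopUser(HttpUser):
    wait_time = between(1, 3)  # simulated think time, in seconds

    @task(3)  # browsing is weighted three times heavier than checkout
    def browse(self):
        self.client.get("/products")

    @task(1)
    def checkout(self):
        self.client.post("/checkout", json={"sku": "A-1", "qty": 1})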

Rome wasn’t built in a day! I recommend you make a plan for implementing these steps gradually, and begin by applying this practice to only one of your projects. You can then improve it over time before rolling it out to the entire value stream.


Feel free to contact me at any time with your questions related to performance testing and engineering.

Happy performance engineering!
