Josef Mayrhofer

What Are the Top Pitfalls in Load and Performance Testing?

It's great to see that awareness of application performance is growing, but we must keep in mind that a successful load and performance test consists of more than tools alone. It requires vast experience to build a meaningful simulation of your current load patterns or your future growth. This post provides some insights into the top load and performance testing pitfalls you should avoid.


# No NFRs in place

I have seen many performance testing projects start without any idea of response time, throughput, or system resource utilization requirements. Some of these projects were in a big hurry and ignored my advice. It was no surprise that nobody could say whether the application was good enough for deployment to production, because we had no acceptance criteria. Never start your performance testing without meaningful performance requirements!
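As a rough sketch of what "meaningful performance requirements" can look like in practice, the snippet below turns NFRs into machine-checkable acceptance criteria. All threshold values and metric names here are illustrative assumptions, not recommendations:

```python
# Hypothetical NFRs expressed as machine-checkable acceptance criteria.
NFRS = {
    "p95_response_time_s": 1.5,   # 95th percentile response time
    "min_throughput_rps": 200.0,  # sustained requests per second
    "max_error_rate": 0.01,       # at most 1% failed requests
    "max_cpu_utilization": 0.75,  # keep headroom on application servers
}

def check_nfrs(measured: dict) -> list[str]:
    """Compare measured test results against the NFRs; an empty list
    means the application meets its acceptance criteria."""
    violations = []
    if measured["p95_response_time_s"] > NFRS["p95_response_time_s"]:
        violations.append("p95 response time above target")
    if measured["throughput_rps"] < NFRS["min_throughput_rps"]:
        violations.append("throughput below target")
    if measured["error_rate"] > NFRS["max_error_rate"]:
        violations.append("error rate above limit")
    if measured["cpu_utilization"] > NFRS["max_cpu_utilization"]:
        violations.append("CPU utilization above limit")
    return violations
```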


# Wrong load pattern or workload model

A user and transaction volume that is too high or too low puts your investment in load and performance testing at risk: either you create unnecessary trouble, or you won't identify the real hotspots. It helps to challenge the provided workload model. Check the user and transaction mix in production, or use Little's Law to ensure that you are simulating an appropriate number of concurrent users.
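Little's Law relates concurrency to throughput: N = X × (R + Z), where N is the number of concurrent users, X the throughput, R the response time, and Z the think time. A minimal sketch with made-up numbers:

```python
def littles_law_users(throughput_tps: float, response_time_s: float,
                      think_time_s: float) -> float:
    """Little's Law: N = X * (R + Z).

    N = concurrent users, X = throughput (transactions per second),
    R = response time (s), Z = think time between transactions (s).
    """
    return throughput_tps * (response_time_s + think_time_s)

# Illustrative numbers: 50 tps, 2 s response time, 28 s think time
# -> 50 * (2 + 28) = 1500 concurrent users.
print(littles_law_users(50, 2, 28))  # 1500.0
```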


# Sizing of the load testing environment

Only in rare cases can performance tests be executed in a production-like environment. A smaller environment requires you to scale down your user and transaction volume; your test won't succeed if you run the full production workload on a much smaller testing stage. I usually run experiments with different load volumes to validate the scaling factor and to check that my understanding of the application and its environment is correct.
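As an illustration (the workload numbers and the scaling factor below are pure assumptions), scaling down a workload model and stepping up the load lets you validate whether the environment behaves as expected:

```python
# Hypothetical production workload model (illustrative numbers only).
PROD_WORKLOAD = {"concurrent_users": 2000, "tps": 120}

def scale_workload(workload: dict, scale_factor: float) -> dict:
    """Scale user and transaction volume, e.g. 0.25 for a quarter-size stage."""
    return {k: round(v * scale_factor, 1) for k, v in workload.items()}

# Step up through 25%, 50%, 75%, and 100% of the scaled-down target;
# if response times stay roughly flat, the scaling assumption holds.
target = scale_workload(PROD_WORKLOAD, 0.25)
for step in (0.25, 0.5, 0.75, 1.0):
    print(step, scale_workload(target, step))
```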


# Last-minute performance testing

Don't be surprised when a performance test executed a few days before deployment to production uncovers significant bottlenecks that there is no time left to fix. Time to production is vital. Treat performance testing much like functional testing and make it part of the entire value stream.
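One way to shift performance testing left is a small smoke test that runs on every build, long before the final pre-production test. The endpoint, sample size, and threshold below are assumptions for illustration, sketched with pytest and requests:

```python
import statistics
import time

import requests  # third-party HTTP client

def test_smoke_performance():
    """Fail the build if a basic endpoint regresses noticeably."""
    timings = []
    for _ in range(20):
        start = time.perf_counter()
        response = requests.get("https://staging.example.com/health",
                                timeout=5)
        timings.append(time.perf_counter() - start)
        assert response.status_code == 200
    # Fail the pipeline if the median call is slower than 300 ms.
    assert statistics.median(timings) < 0.3
```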


# Inappropriate test cases

There are different approaches to picking test cases for load and performance testing, and many of them are wrong. For instance, a developer who identified a few long-running requests asked performance engineers to use those requests as their test cases. Tuning such crawling requests is essential, but they do not necessarily represent the requests that generate the most significant proportion of the load in production.
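A more reliable way to pick test cases is to derive them from production traffic. The sketch below ranks requests in an access log by volume; the log format (Apache common/combined style) and the file name are assumptions:

```python
from collections import Counter

def top_transactions(log_path: str, n: int = 10) -> list[tuple[str, int]]:
    """Rank requests by their share of production volume."""
    counts = Counter()
    with open(log_path) as log:
        for line in log:
            parts = line.split('"')
            if len(parts) > 1:  # e.g. '... "GET /checkout HTTP/1.1" ...'
                method_and_path = parts[1].split()[:2]
                counts[" ".join(method_and_path)] += 1
    return counts.most_common(n)

# Script the top-N transactions first; they carry most of the real load.
# print(top_transactions("access.log"))
```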


# Use the wrong data volume and data mix

Always keep in mind that data volume and data mix significantly impact your application's performance. A caching layer that stores a certain amount of previously used data can make your services appear very fast if they are exercised with the same data set over and over again. Check your data volume and ensure that your dynamic data reflects an actual production-like situation!
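A minimal sketch of parameterized test data: each virtual user draws from a large, production-like pool instead of replaying one record, so caches behave as they would in production. The pool size and ID format are assumptions:

```python
import random

# Placeholder pool; in practice, load ~100k production-like account IDs
# from a CSV or database export so the mix matches reality (the file name
# "test_accounts.csv" would be an assumption as well).
ACCOUNT_IDS = [f"ACC{i:06d}" for i in range(100_000)]

def next_account_id() -> str:
    # Randomized selection keeps cache hit rates closer to production
    # than replaying the same record over and over again.
    return random.choice(ACCOUNT_IDS)
```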


# Simulation approach

There are many options for injecting the load volume, and all of them have their relevance. If you simulate and tune API-level requests only, but your users work with a web-based front end, they may still not be satisfied with the performance they experience.
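For illustration, here is a protocol-level load injection sketch using Locust, a Python load testing tool; the endpoint and think times are assumptions. Note what such a script does not measure: browser rendering, JavaScript execution, and asset loading that your end users experience:

```python
from locust import HttpUser, task, between

class ApiUser(HttpUser):
    wait_time = between(5, 15)  # think time between requests

    @task
    def search(self):
        # Measures server and network time only; pair this with a small
        # number of browser-driven tests if front-end performance matters
        # to your users.
        self.client.get("/api/search?q=performance")
```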


# Park performance defects in the backlog forever

We should never follow a checkbox-based performance testing approach just because an internal or external partner expects it. When we uncover bottlenecks, we must log defects, bring them to our developers' attention, and ensure that a fix is implemented as part of the next sprint.


# Skip the retest of performance improvements

Raising a defect, explaining the problem to the developers, and moving on is not good enough. In reality, this happens very often: performance defects receive a lower priority, and due to time pressure, developers focus only on significant functional defects. Every performance fix deserves a retest to confirm that it actually works under load.


# No performance monitoring and transaction tracing in place

An excellently crafted performance test will only show that something is not responding as expected or is crashing; it won't show you the real root cause of such issues. Keep in mind that performance testing goes hand in hand with performance monitoring and transaction tracing. Never start a performance test before you have ensured that all requests are traced across all layers.
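One way to make every simulated request traceable is to send a W3C `traceparent` header with each call, so your tracing backend can follow every transaction across all layers. A minimal sketch; the URL below is illustrative:

```python
import secrets

import requests

def traceparent() -> str:
    """Build a W3C traceparent header: version-traceid-parentid-flags."""
    trace_id = secrets.token_hex(16)   # 32 hex characters
    parent_id = secrets.token_hex(8)   # 16 hex characters
    return f"00-{trace_id}-{parent_id}-01"

response = requests.get(
    "https://staging.example.com/api/orders",
    headers={"traceparent": traceparent()},
    timeout=10,
)
```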


What are your learnings from previous performance testing projects?


Keep up the great work! Happy Performance Engineering!








1 Comment


Russell Luke
April 15, 2021

I think this is a great list! During my 20 years of helping to create performance monitoring tools, I've seen all of these happen on an alarmingly frequent basis. It is one of the reasons we just released our new DBmarlin database monitoring, making sure it helps any performance tester with relational technology in the mix.


russell.luke@applicationperformance.com
