Updated: Feb 14
Nowadays, information technology is at the heart of every business. Outages or slowdowns in critical software components often impact the whole organization.
According to research from Aberdeen, performance is still an afterthought: only 3% of organizations identify the source of delays, and just 9% perform root-cause analysis of application problems. Response time measurement is also widely ignored; only one in five organizations collects response time metrics. It seems that continuous performance optimization is widely neglected. In this post, I will explain some reasons for this surprising situation and give you advice on how to avoid these pitfalls.
"Improve and forget" is bad advice
Successful businesses learned years ago that slow-responding applications are both a nightmare for support teams and a frustrating experience for users. Naturally, they walked through a valley of tears for some time, struggling with response time issues.
In response, they transformed their software development pipeline and considered performance from day one: non-functional requirements are defined at the start, and the whole development team knows how to verify them during design, implementation and testing. Yet once their new products pass all performance tests and are deployed to production, performance suddenly degrades.
Close the loop
Performance engineering is a continuous process. A perfectly designed and executed load and performance test only mitigates the risk that the new application cannot handle the simulated load under certain data constraints. Both the workload and the data volume can change quickly in production. So, did you design performance tests that address all those uncertainties?
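One way to address those uncertainties is to test a matrix of workload and data-volume combinations instead of a single point. Here is a minimal sketch of that idea; the user counts, row counts and the helper function are all hypothetical illustrations, not part of any specific tool:

```python
from itertools import product

# Hypothetical planning values: expected load, peak load and a safety
# margin beyond peak, combined with several production-like data sizes.
concurrent_users = [50, 200, 800]
dataset_rows = [100_000, 1_000_000, 10_000_000]

def build_test_matrix(users, rows):
    """Return one test scenario per (workload, data volume) combination."""
    return [{"users": u, "rows": r} for u, r in product(users, rows)]

scenarios = build_test_matrix(concurrent_users, dataset_rows)
print(len(scenarios))  # 9 scenarios instead of a single test point
```

Running each scenario through the same test script makes it visible whether response times degrade gracefully or collapse once workload or data volume exceeds the originally simulated point.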
One of the pitfalls in the performance engineering space is neglecting continuous performance optimization. Certainly, you keep performance in mind during application design and testing, and you eliminate response time hotspots in pre-production stages. The problem is that the environment, data or user activities change over time, user experience degrades, and nobody notices the decline.
Continuous performance engineering requires a closed-loop approach. Start early in the life cycle, repeat the measurements regularly and extend performance reviews into production environments. It makes good sense to share performance metrics collected in production with developers and testers, because these metrics support troubleshooting and allow them to adjust their performance testing approach to the current situation.
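To make the closed loop concrete, the production side can be as simple as comparing a response-time percentile against the baseline established during testing. This is a minimal sketch using only the Python standard library; the function names, the 20% tolerance and the sample data are assumptions for illustration:

```python
import random
import statistics

def p95(samples_ms):
    """95th-percentile response time via statistics.quantiles."""
    return statistics.quantiles(samples_ms, n=100)[94]

def degraded(samples_ms, baseline_p95_ms, tolerance=1.2):
    """Flag degradation when the current p95 exceeds baseline by >20%."""
    return p95(samples_ms) > baseline_p95_ms * tolerance

# Simulated production samples (ms); in reality these would come from
# your monitoring or APM data feed.
random.seed(42)
production_samples = [random.gauss(300, 50) for _ in range(1000)]

if degraded(production_samples, baseline_p95_ms=250):
    print("p95 regression vs. test baseline - notify dev and test teams")
```

A check like this, run on a schedule, gives developers and testers an early signal that production behavior has drifted away from what the last performance test certified.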
All things considered, performance is more a journey than a destination.