By now we’re all fully aware that nothing can be achieved without strong requirements in place. For some reason, though, many of us ignore this for the more technical, nonfunctional aspects, such as why and how we build our IT services. In one of my recent projects, nobody had any idea of the actual number of concurrent users or service requests per second. Obviously, starting a performance validation without any clue about the objectives is a bad idea: what would you measure, and who would decide whether a test passed or failed? Don’t fall into this trap. Your first question in any performance test, agile or waterfall, should be: what are the performance requirements? If no detailed information is available, your first task should be to create meaningful performance requirements and KPIs.
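One way to make such requirements meaningful is to express them as machine-checkable thresholds rather than prose. A minimal sketch, assuming illustrative transaction names and KPI values (the numbers here are not from any real project):

```python
# Performance requirements expressed as data: each transaction gets a
# 95th-percentile latency budget, an error-rate ceiling, and a minimum
# throughput. All names and values are illustrative assumptions.

REQUIREMENTS = {
    "checkout": {"p95_ms": 800, "error_rate": 0.01, "rps": 50},
    "search":   {"p95_ms": 300, "error_rate": 0.005, "rps": 200},
}

def evaluate(transaction: str, p95_ms: float, error_rate: float, rps: float) -> bool:
    """Return True if the measured figures meet the agreed KPIs."""
    req = REQUIREMENTS[transaction]
    return (p95_ms <= req["p95_ms"]
            and error_rate <= req["error_rate"]
            and rps >= req["rps"])

print(evaluate("search", p95_ms=250, error_rate=0.001, rps=220))  # True
```

With requirements in this shape, "did the test pass?" stops being a matter of opinion and becomes a lookup.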
Fully automated design and architecture validation
There are simply not enough engineers available who can identify all the dependencies in our complex IT services. Automation is not only your chance for endless scalability; it also tackles many of the tasks in which humans become the bottleneck. We all agree that automated testing is needed to achieve the required coverage and reduce time to market. For some reason, though, design and architecture validation is still a whiteboard task. Open your eyes and explore the tools on offer. Cloud-based services, virtualization, API-first strategies and other developments are driving complexity ever higher. Our only chance of finding performance hotspots early is automated architecture validation: use transaction tracing, let the APM tools show the service flows involved, and compare the figures with the intended design.
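The comparison step can be automated with very little code. A hypothetical sketch, assuming the traced call graph has been exported from an APM tool as a set of caller/callee pairs (the service names are made up):

```python
# Compare a traced call graph against the intended architecture.
# INTENDED lists the service-to-service calls the design allows;
# anything traced beyond that is a deviation worth investigating.
# All edge names are illustrative assumptions.

INTENDED = {("web", "api"), ("api", "db"), ("api", "cache")}

def validate_flow(traced_edges):
    """Return the calls observed in tracing that the design does not allow."""
    return set(traced_edges) - INTENDED

traced = {("web", "api"), ("api", "db"), ("api", "legacy-soap")}
print(validate_flow(traced))  # {('api', 'legacy-soap')}
```

Run this after every tracing session and unexpected dependencies surface automatically instead of waiting for a whiteboard review.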
Bring performance validation closer to coding
So much time and money is wasted uncovering bugs too late in the software-development lifecycle. That’s why unit testing is king. It is especially important if you’re aiming for a high-quality product, because the earlier we identify problems, the easier they are to fix. In the initial stages of a project, developers still have the corresponding code in their minds, which makes the fix much simpler. Test-driven development (TDD) also helps developers look at the code from a tester’s perspective; the result is great unit-test coverage and built-in quality from the very beginning. TDD shifts the focus to functional validation, and performance tasks traditionally come later because so little of the system exists at this stage. Cucumber, for instance, is a widely used TDD solution, and the corresponding Cucumber performance framework offers a scaled-down load test for checking the basic performance of new services. We need more of these generic TDD solutions that let us write test cases first, only once, and then reuse them for functional, performance and security tests.
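Even without a dedicated framework, a scaled-down performance check can sit right next to the unit tests. A minimal sketch, assuming a stand-in service function and an illustrative latency budget:

```python
# A unit-level performance check in the spirit of a scaled-down load test:
# call the new service repeatedly and assert a latency budget.
# The service under test and the 5 ms budget are illustrative assumptions.
import time
import statistics

def new_service(x: int) -> int:
    """Stand-in for the code under test."""
    return x * x

def median_latency_ms(fn, n: int = 100) -> float:
    """Run fn n times and return the median latency in milliseconds."""
    samples = []
    for i in range(n):
        start = time.perf_counter()
        fn(i)
        samples.append((time.perf_counter() - start) * 1000)
    return statistics.median(samples)

median_ms = median_latency_ms(new_service)
assert median_ms < 5, f"basic performance budget exceeded: {median_ms:.2f} ms"
```

Because it runs with the unit-test suite, a performance regression fails the build while the developer still has the code in mind.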
Continuous testing for performance
Automating as much as you can is never a bad idea: once your business starts to grow, every manual step can become a bottleneck. Never delay performance testing to the last minute. It should run fully automated, ideally when nobody else is using the test environment. Share the performance figures with all your teams and build up a powerful performance repository.
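A performance repository can start as simply as an append-only log that every automated run writes to. A minimal sketch, assuming a JSON-lines file and illustrative field names:

```python
# Sketch of a simple "performance repository": each automated run appends
# its figures so trends stay visible to every team.
# The file path and record fields are illustrative assumptions.
import json
import datetime
from pathlib import Path

def record_run(repo: Path, transaction: str, p95_ms: float, rps: float) -> None:
    """Append one run's figures as a JSON line to the repository file."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "transaction": transaction,
        "p95_ms": p95_ms,
        "rps": rps,
    }
    with repo.open("a") as f:
        f.write(json.dumps(entry) + "\n")

repo = Path("perf_results.jsonl")
record_run(repo, "checkout", p95_ms=740.0, rps=55.0)
```

Once the figures accumulate, any team can plot them over time and spot a creeping regression long before release day.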
Know your limitations
When application usage grows, response times can suffer, and your happy customers will suddenly feel frustrated and stop using your services. This happens on a daily basis; don’t make the same mistake as so many other companies out there. Guessing your limitations is a bad idea because you won’t understand the dynamics of your complex business services. A methodical approach combining performance testing, workload modeling and performance engineering will help you see the breaking points and hotspots much more clearly. You’ll also learn whether tuning is required to support the intended number of users and service requests. Sometimes extreme conditions or market events change the way your IT services are used. No matter how severe the conditions, knowing your limits in terms of supported requests and user volumes will let you sleep much more soundly.
Modern applications can scale on demand
When business demand grows, many applications and websites suffer as a result. Autoscaling is currently a rising star, and we’re all hoping this approach will prevent us from running into slowdowns. In reality, however, it’s a fancy feature that still requires careful configuration and quality assurance. Manual testing won’t help with such auto-scaling scenarios; performance testing is the way to validate that the configured scaling patterns work as intended. In many cases, additional tweaks are required before operational teams can confidently ramp services up and down.
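Before firing load at a live cluster, it can help to sanity-check the scaling rule itself. A hypothetical sketch, assuming a simple target-throughput-per-replica policy with illustrative limits (real autoscalers such as the Kubernetes HPA use a comparable ceil-to-target calculation, but the numbers here are made up):

```python
# Sanity-check an auto-scaling rule under a load ramp: for each load level,
# compute the replica count a target-rps-per-replica policy would choose.
# MIN/MAX replica counts and the target are illustrative assumptions.
import math

MIN_REPLICAS, MAX_REPLICAS = 2, 10
TARGET_RPS_PER_REPLICA = 100

def desired_replicas(total_rps: float) -> int:
    """Replicas needed to keep each instance at or below its target load."""
    wanted = math.ceil(total_rps / TARGET_RPS_PER_REPLICA)
    return max(MIN_REPLICAS, min(MAX_REPLICAS, wanted))

# Ramp the load up and back down and inspect how the policy reacts.
for rps in (50, 250, 750, 1500, 300):
    print(rps, desired_replicas(rps))
```

Running the ramp makes the MAX_REPLICAS ceiling visible immediately: at 1,500 requests per second the policy saturates at 10 replicas, which is exactly the kind of limit a performance test should confirm before production does.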
Improve your services every day
Reliable applications are a journey, not a destination. The fact that everything is constantly changing in our complex application landscapes forces us to keep an open mind and make continuous simplifications and improvements. Shortcuts may help you push new code to production, but if you stop improving continuously, your application will die sooner or later. Workaround after workaround is never a good idea. Instead, build teams that are motivated to find and fix hotspots on all layers, every day.
Tools that simplify your job
We have high hopes for open-source load- and performance-testing solutions. I’ve used such free tools in many assignments, but, like me, you’ll realize at some point that they don’t fulfil all your complex requirements, such as full browser-based testing or network virtualization. It all depends on the context in which you are working. Early performance testing works fine with open-source solutions, but once your tests get closer to a full production-like simulation, you’ll need to rethink your tool strategy. Failing to invest in the right tools will often raise your labor costs. Finally, make sure you have enough skilled engineers before you commit to an open-source-only load- and performance-testing strategy.
Happy performance engineering!