Our technology landscape is changing all the time and impacts our daily lives. Twenty years ago, mobile devices barely existed; these days, every primary school child carries one in their pocket. We have a tremendous amount of knowledge instantly accessible in our pockets, which is only possible thanks to the massive developments in our technological landscape. In the early 2000s, I worked for a mobile telco provider that pioneered WAP technology and offered basic internet capabilities for mobile phones. They did not believe that the internet on mobile devices could revolutionize the world. A few years later, Apple introduced the iPhone and changed history forever.
Our business applications show similar groundbreaking developments, such as the shift to the cloud, microservices, and artificial intelligence, which already impact our work.
Application -> Microservice
The impressive transformation from a single monolithic application to microservices opens many opportunities. In the 70s and 80s, we mainly developed so-called 1-tier applications installed on a single mainframe server. In the 90s and 2000s, the internet hype created more appetite for scalable business applications, so we introduced frontend and backend layers to fulfill the growing internet community's needs. Reusability was still limited in those days. In the 21st century, we realized that large applications must be split into reusable components, and we started developing microservices. Enterprise applications consist of hundreds of microservices these days.
The old last-minute performance testing approach no longer works. We need a continuous performance engineering approach to avoid critical reliability problems in microservice-based enterprise applications. A minimal sketch of such a repeatable test follows below.
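To make this concrete, here is a minimal sketch of a lightweight, repeatable load test that could run against a single microservice on every build. It uses Locust, a Python load-testing tool, chosen here purely as an example; the host, endpoints, and payload are hypothetical placeholders.

```python
# A minimal Locust scenario (locustfile.py) for one microservice.
# Host, paths, and payload are hypothetical placeholders.
from locust import HttpUser, task, between

class CheckoutServiceUser(HttpUser):
    host = "https://staging.example.com"  # assumed test environment
    wait_time = between(1, 3)  # simulated think time in seconds

    @task(3)
    def view_cart(self):
        self.client.get("/cart")  # weighted 3:1 against checkout

    @task(1)
    def checkout(self):
        self.client.post("/checkout", json={"items": [42]})
```

Such a script can run headless from a pipeline, for example with "locust -f locustfile.py --headless -u 50 -r 5 -t 5m", which simulates 50 users for five minutes on every change.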
Datacenter -> Cloud
Shifting from a local data center to the cloud is another groundbreaking achievement. We no longer install and maintain our operating systems on physical machines. Instead, we deploy our business applications on infrastructure-as-a-service or use software-as-a-service offerings from cloud providers such as AWS, Azure, or Google.
Such IaaS or SaaS-based environments come with built-in monitoring capabilities. We performance engineers must include additional aspects, such as bandwidth, round-trip times, communication patterns, and geo-locations, in our performance approach.
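As a simple illustration of the round-trip-time aspect, the sketch below samples response times against a health endpoint; running it from test agents in different regions exposes geo-location effects. The URL is a hypothetical placeholder, and the percentile handling is deliberately simplistic.

```python
# A rough sketch: sampling round-trip times to a service endpoint,
# e.g. from test agents in different geo-locations.
import statistics
import requests

def sample_rtt(url: str, samples: int = 20) -> dict:
    times_ms = []
    for _ in range(samples):
        response = requests.get(url, timeout=10)
        # elapsed covers request send until response headers are parsed
        times_ms.append(response.elapsed.total_seconds() * 1000)
    return {
        "min_ms": min(times_ms),
        "p50_ms": statistics.median(times_ms),
        "max_ms": max(times_ms),
    }

# Hypothetical regional endpoint; compare results across regions.
print(sample_rtt("https://eu-west.example.com/health"))
```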
Bottleneck -> Autoscaling
We often had a single server hosting our business application in the old days. When performance degraded, we upgraded its hardware by adding more CPUs, more RAM, or faster storage. In the modern cloud and IaaS age, we have more powerful capabilities, such as autoscaling, to handle higher request volumes. The beauty of autoscaling is that our IT infrastructure can stay minimal when there is little traffic and scale out quickly when spiky request volumes arrive.
When performance testing and monitoring scalable services and applications, we must keep both vertical and horizontal scaling in mind. Vertical scaling means adding more system resources to an existing instance; horizontal scaling means starting additional service instances. Workload modeling and observability become more critical for auto-scalable systems because we must carefully review how the system under test behaves when the expected mix of requests arrives. Are the assigned limits appropriate, or do they result in performance bottlenecks? Does the scaling process result in a bad user experience? The sketch below shows one way to model such a spiky workload.
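Here is a hedged workload-modeling sketch, again assuming Locust: a quiet baseline followed by a sudden traffic spike, so we can observe whether scale-out keeps response times acceptable and whether scale-in hurts users afterwards. All numbers are illustrative, not recommendations.

```python
# A spiky load shape for autoscaling experiments. Placed in the same
# locustfile as a user class, Locust picks this shape up automatically.
from locust import LoadTestShape

class SpikeShape(LoadTestShape):
    # (end time in seconds, target user count)
    stages = [
        (120, 10),    # 0-120 s: baseline of 10 users
        (180, 200),   # 120-180 s: spike to 200 users, watch scale-out
        (300, 10),    # 180-300 s: back to baseline, watch scale-in
    ]

    def tick(self):
        run_time = self.get_run_time()
        for end_time, users in self.stages:
            if run_time < end_time:
                return (users, 50)  # (user count, spawn rate per second)
        return None  # stop the test
```

Correlating such a run with the platform's scaling events and resource limits answers the questions above far better than a flat, constant load ever could.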
Results -> Solutions
In the past, performance testing was often the process of simulating load and presenting the test results to project managers and development teams. It was the developers' job to identify bottlenecks and implement the improvements. We load and performance testers re-executed the tests to validate those improvements. The problem with this results-driven performance testing was that it took several weeks to improve and stabilize business applications.
As we move to much shorter time-to-market cycles, it increasingly falls to performance engineers to design, implement, and execute performance tests, identify bottlenecks, and work with DevOps teams to implement the tunings.
Impact on Performance Engineering?
To keep up with the changes in our technology landscapes, performance engineering must become:
Lean
Continuous
Scalable
Self-Service
Lean Performance Engineering is the idea of removing the dust from the old waterfall style and transforming it into a lightweight, agile performance engineering practice. Modernizing your tools and making them accessible to everyone in your organization can be a starting point.
Continuous Performance Engineering becomes a must for all digitized businesses. Since we move so fast from dev to prod and must meet rising user experience expectations, our only chance is to integrate performance as a continuous practice into our software life cycle. Performance validation is shifted left, starts early, and is repeated for every change or deployment. One way to enforce this automatically is sketched below.
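A minimal shift-left quality gate, again assuming Locust, could fail the CI job whenever the error rate or the 95th-percentile response time exceeds a budget. The thresholds here are illustrative and should come from your own SLOs.

```python
# Fail the load-test run (non-zero exit code) when performance
# budgets are violated, so the pipeline can block the deployment.
from locust import events

@events.quitting.add_listener
def enforce_slos(environment, **kwargs):
    stats = environment.stats.total
    if stats.fail_ratio > 0.01:
        print("Gate failed: error rate above 1%")
        environment.process_exit_code = 1
    elif stats.get_response_time_percentile(0.95) > 800:
        print("Gate failed: p95 response time above 800 ms")
        environment.process_exit_code = 1
```

With such a listener in place, every change or deployment gets an automated pass/fail verdict instead of a report that someone reads weeks later.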
Performance Engineering has become a much bigger topic these days. Many businesses started a Center of Excellence for Performance in charge of all performance engineering activities. The problem was always scaling these CoEs up and down according to demand. A much more innovative approach is for these CoEs to focus on the core performance engineering activities and enable the delivery teams, instead of doing all the work themselves.
If Performance Engineering methods, practices, and tools are not accessible to the entire organization, you are set up for failure. When we deploy applications to production faster, we have less time to validate the user experience. Easily accessible, highly automated practices will be adopted, and others will be left behind. Making Performance Engineering a self-service helps bring these important practices to the enterprise and turns performance into a shared responsibility.
Keep up the great work! Happy Performance Engineering!