The growing dialogue about changing software testing trends, especially amid the rise of DevOps and Agile practices, is spawning an epidemic of theories about where the industry is heading. While many of these theories are legitimate, others appear less so. One area of discussion that sets off an alarm bell for me is the idea that performance engineering will replace performance testing.
The basic premise of this argument is that instead of executing performance test scripts, the emphasis will shift to analyzing how all the parts of the application work together. In other words, performance, security, usability, hardware, software, configuration and business value would all be inspected by performance engineers as they collaborate and iterate on the items deemed most valuable, then delivered quickly to ensure a top-grade product every step of the way.
With 20 years of software testing experience, I couldn't disagree more with this claim. Performance testing is an invaluable part of application development and, as such, instrumental in ensuring the speed, scalability and stability of the system. I'm hardly alone in thinking this way.
Just consider that an increasing number of IT professionals are embedding performance testing in the product life cycle from the very beginning. In the spirit of Agile's and DevOps' continuous development, it's safe to say this trend will accelerate at dizzying speed. In practice, that means analyzing performance data behind the scenes, making sure every feature meets its performance success criteria along with the functional requirements, and ensuring basic performance best practices are followed, all in real time, if the development cycle is to run without a hitch. And these are only a few of the items testers need to scratch off their list; the sheer volume of tasks is enormous. Put differently, the industry is recognizing performance testing as crucial for determining application speed, responsiveness and stability under different load scenarios.
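To make the idea of performance criteria sitting alongside functional requirements concrete, here is a minimal sketch of the kind of lightweight check that could run in a CI pipeline next to functional tests. The endpoint, request count and 500 ms p95 budget are assumptions for illustration, not a prescription.

import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://staging.example.com/checkout"  # hypothetical endpoint, for illustration only
REQUEST_COUNT = 50
P95_BUDGET_MS = 500  # assumed performance success criterion

def timed_request(_):
    # Issue one request and return its latency in milliseconds.
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as response:
        response.read()
    return (time.perf_counter() - start) * 1000

with ThreadPoolExecutor(max_workers=10) as pool:
    latencies = list(pool.map(timed_request, range(REQUEST_COUNT)))

p95 = statistics.quantiles(latencies, n=20)[-1]  # approximate 95th percentile
print(f"median {statistics.median(latencies):.0f} ms, p95 {p95:.0f} ms")
assert p95 <= P95_BUDGET_MS, f"performance criterion not met: p95 {p95:.0f} ms > {P95_BUDGET_MS} ms"

A check this small obviously isn't a full load test; the point is that a performance success criterion can fail the build the same way a functional assertion does.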
The average cost of network downtime is around $5,600 per minute, which adds up to well over $300,000 per hour, according to Gartner. Expecting consumers to be patient until your site is up and running again is the equivalent of asking them to shop at your competitor. Many organizations are bleeding money as a result of poor performance, whether they lose important data or suffer public embarrassment after a system crash. I can't emphasize enough that many of these problems would be eliminated with the right support from performance testers.
Performance engineers don’t have the time to monitor performance properly.
With Agile and DevOps, performance engineers wrestle with a whole host of new problems: an environment that may not be ready for performance monitoring, a shortage of time to conduct the tests, changes that are implemented only after the performance engineer has already monitored and tested the application, and so on. Then there's the lack of developer support for repairing defective environments and the absence of documentation that accurately chronicles how a feature works. All this and more points to the need for performance testers who will accurately and efficiently test system performance.
Objectivity and independence are key.
Software testers are hired, in part, for their outsider perspective, which enables them to provide impartial insights into the application. As autonomous agents, they can spot issues that insiders would otherwise miss, especially in Agile and DevOps contexts where code is written in real time. Think about it: if the system is tested by the very person who wrote the software, there's a high risk of bias. Developers are largely motivated by getting their product released as quickly as possible; they aren't as keen to deconstruct the application as someone who wasn't part of its design. Just as you wouldn't expect a writer, even the most talented one, to edit their own work, the same rule should apply to the design and testing of software applications.
Performance testing isn’t easy.
Many false truisms have surfaced in the IT community over the past few years, but the most far-fetched theory I've heard is that anyone who knows how to use a performance tool can be a performance tester. Performance testing isn't like other forms of testing, where test coverage is the principal consideration; test accuracy matters far more. There's no single metric for assessing performance quality, so testers must rely on their intelligence and creativity to mimic end-user access patterns. This absence of a gold standard for rating performance has led many managers to assume performance testing can be done by, well, virtually anyone. I expect any sort of amateur performance testing to be phased out as more companies embrace DevOps and Agile. Companies won't be able to afford mobile applications that take too long to respond, especially during peak periods.
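As a rough illustration of what mimicking end-user access patterns involves, here is a small sketch of a weighted workload model with think times. The scenario names, weights and pauses are invented for the example; in real work they would come from production analytics.

import random
import time

# Relative traffic mix for a hypothetical retail site (invented numbers).
SCENARIOS = {
    "browse_catalogue": 0.60,  # most visitors just browse
    "search_product":   0.25,
    "add_to_cart":      0.10,
    "checkout":         0.05,  # rare but business-critical
}

def pick_scenario():
    names = list(SCENARIOS)
    weights = list(SCENARIOS.values())
    return random.choices(names, weights=weights, k=1)[0]

def simulate_user(session_length=5):
    """One virtual user: a short sequence of weighted actions with think times."""
    for _ in range(session_length):
        action = pick_scenario()
        print(f"executing scenario: {action}")
        # Real users pause between clicks; a realistic model has to as well.
        time.sleep(random.uniform(1.0, 5.0))

simulate_user()

Getting that mix and those pauses right is exactly the judgment work a tool can't do on its own, which is why knowing how to drive a performance tool is not the same thing as being a performance tester.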
Performance testers excel at reporting.
Even when there's no agreed-upon standard for declaring that performance quality has made the grade, performance testers still need to assess performance and report on it. This involves measuring the performance of individual layers, including the web server, the application server and the network, and chronicling glitches for other stakeholders to read. Reporting of this kind isn't really an engineering activity: running a surgical analysis to explain why different layers don't meet the service-level agreement is usually not something engineers are trained to do.
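For a sense of what that per-layer reporting can look like, here is a hedged sketch that breaks a single transaction down by tier and flags an SLA breach. The layer names, timings and the 2-second SLA are placeholder values, not measurements.

SLA_TOTAL_MS = 2000  # assumed end-to-end service-level agreement

# Timings for one transaction, broken down by tier (illustrative numbers).
layer_timings_ms = {
    "network":            120,
    "web server":         180,
    "application server": 950,
    "database":           640,
}

total = sum(layer_timings_ms.values())
print(f"End-to-end response time: {total} ms (SLA: {SLA_TOTAL_MS} ms)")

# List each tier's contribution, largest first.
for layer, ms in sorted(layer_timings_ms.items(), key=lambda kv: kv[1], reverse=True):
    share = ms / total * 100
    print(f"  {layer:<20} {ms:>5} ms  ({share:4.1f}% of total)")

if total > SLA_TOTAL_MS:
    worst = max(layer_timings_ms, key=layer_timings_ms.get)
    print(f"SLA breached; largest contributor is the {worst} tier.")

The numbers are the easy part; the value the tester adds is the narrative around them, explaining to stakeholders which tier is responsible and why.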
Conclusion
Production performance testing is absolutely vital: it ensures the system can handle user traffic without hiccups, shows how the system will scale and determines how much infrastructure is required for traffic spikes, particularly peak loads. I don't think it's reasonable to expect performance engineers to carry out all of this and shoulder all the responsibility in the event of stability complications and other unforeseen malfunctions.