Performance Test Modelling: Part 5 – Final Thoughts
Welcome to the final post in our series on performance test modelling! If you missed any of the previous parts, be sure to check them out for a complete overview.
In our last discussion, we explored how the system under test impacts performance results, and how test modelling and content shape the outcome. Today, we'll look at what to do when the predicted results of a performance test don't align with reality, and how to address performance issues after go-live.
Scenario 1: The Performance Test Doesn't Meet Expectations
Every performance test is built around two key data points: the load, such as the number of concurrent users, and the expected result, such as completing a transaction within a specific timeframe. If the test fails to meet these criteria, it's essential to revisit the assumptions behind the test model. Were any of them overly optimistic or pessimistic? By validating and refining these assumptions, you can adjust the test and re-execute it against a more realistic model.
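To make that pass/fail check concrete, here is a minimal sketch using only the Python standard library. The URL, user count, and the two-second threshold are illustrative assumptions rather than values from this series; a real test would use your tool of choice and your own model's numbers.

```python
import time
import statistics
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Illustrative assumptions -- substitute your own model's values.
TARGET_URL = "https://example.com/checkout"  # hypothetical transaction endpoint
CONCURRENT_USERS = 50                        # assumed concurrency from the test model
REQUESTS_PER_USER = 10
THRESHOLD_SECONDS = 2.0                      # assumed "expected result"

def timed_request(_):
    """Issue one request and return its wall-clock latency in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(TARGET_URL, timeout=30) as resp:
        resp.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    latencies = list(pool.map(timed_request, range(CONCURRENT_USERS * REQUESTS_PER_USER)))

# Compare the observed 95th percentile against the modelled expectation.
p95 = statistics.quantiles(latencies, n=100)[94]
print(f"p95 latency: {p95:.2f}s (threshold: {THRESHOLD_SECONDS}s)")
if p95 > THRESHOLD_SECONDS:
    print("Test FAILED -- revisit the model's assumptions before re-executing.")
```

The point of the sketch is the final comparison: a test only "fails" relative to an explicit, documented expectation, which is exactly what the test model provides.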
Sometimes, even with accurate assumptions, issues arise, such as high latency on cache misses or an overloaded backend. In those cases, performance tuning or upscaling may be necessary. A solid test model, backed by data, lets you hold productive discussions with stakeholders, including Adobe, and address performance issues effectively.
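One way to tell these backend problems apart from a flawed model is to split your measured latencies by cache status. Here is a small sketch of that idea; the sample values are made up, and the HIT/MISS labels assume a CDN that reports an X-Cache-style header, which varies by vendor.

```python
import statistics
from collections import defaultdict

# Hypothetical per-request samples: (latency_seconds, cache_status).
# In practice these would come from your load-test tool's result log.
samples = [
    (0.08, "HIT"), (0.09, "HIT"), (0.07, "HIT"),
    (1.40, "MISS"), (2.10, "MISS"), (1.90, "MISS"),
]

by_status = defaultdict(list)
for latency, status in samples:
    by_status[status].append(latency)

# A large gap between the two medians points at expensive cache misses,
# i.e. a backend/tuning problem rather than a flawed test model.
for status, values in by_status.items():
    print(f"{status}: median {statistics.median(values):.2f}s over {len(values)} requests")
```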
Scenario 2: Post Go-Live Performance Challenges
When performance tests were never conducted, or their results didn't match production reality, outages and performance issues can surface after go-live. Mitigation becomes the primary focus, often requiring backend upscaling and tailored workarounds. Identifying the root cause of the gap between predicted and actual performance is crucial to implementing long-term fixes and avoiding similar issues in the future.
A well-documented performance test model lets you compare your assumptions against real-world behaviour, which helps resolve production performance problems efficiently. Without such a model, addressing performance issues becomes much harder, often leading to ad-hoc testing in production, which is chaotic and risky.
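As a sketch of what that comparison can look like in practice, consider checking the traffic mix you modelled against the mix observed in production access logs. All figures and page names below are hypothetical, and the tolerance is an arbitrary illustrative threshold.

```python
# Hypothetical figures: the modelled traffic mix from the test plan
# versus the mix observed in production access logs.
modelled = {"search": 0.60, "product_page": 0.30, "checkout": 0.10}
observed = {"search": 0.35, "product_page": 0.40, "checkout": 0.25}

# Flag any page type whose real-world share deviates noticeably from
# the assumption -- these are the first candidates for a revised model.
TOLERANCE = 0.10  # arbitrary illustrative threshold
for page, assumed_share in modelled.items():
    actual_share = observed.get(page, 0.0)
    if abs(actual_share - assumed_share) > TOLERANCE:
        print(f"{page}: modelled {assumed_share:.0%}, observed {actual_share:.0%} -- revisit assumption")
```

A check like this turns a vague "production behaves differently" into a specific, fixable list of assumptions to update before the next test run.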
Conclusion
Having a robust performance test model is key to minimizing post-go-live issues and improving system performance. By investing time and effort in test design, you can quickly identify and resolve performance challenges, ensuring a smoother operational experience. I hope this series has provided valuable insights to enhance your performance testing practices.
Thank you for joining me on this journey through performance test modelling. While this series comes to a close, I believe certain topics deserve further exploration in future blog posts. Stay tuned for more in-depth discussions on performance testing best practices.