Exploring the Impact of Test System and Test Data on Performance Testing
Welcome to post 3 in my series on Performance Test Modelling, where we delve into the intricacies of designing realistic performance tests. If you missed the earlier posts, make sure to check them out for a comprehensive understanding of this topic.
In the previous posts, we emphasized the significance of having a well-defined model for performance tests and highlighted the need for accurate load factors, typically measured by “concurrent users,” to create a meaningful test scenario.
Understanding the Influence of Test System and Test Data
In this post, we will focus on how the test system and test data impact performance tests and their outcomes. We will conduct a series of thought experiments and evaluate the results based on the cache-hit ratio of a Content Delivery Network (CDN) as a proxy metric.
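Throughout these experiments, the cache-hit ratio is simply the fraction of requests the CDN can answer from its cache: hits / (hits + misses). The higher the ratio, the fewer requests reach the backend system.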
Experiment 1: Testing a Simple Site with CDN
Let’s consider a basic website with 1 page, 5 images, 1 CSS file, and 1 JS file, all hosted on a CDN. If we test this site with 100, 500, and 1000 concurrent users, what results do we expect?
Surprisingly, we would likely observe similar results across all concurrency levels, because the CDN serves this handful of files rapidly with a near-perfect cache-hit ratio. However, such a test merely validates the CDN’s performance rather than the site behind it, raising questions about the test’s relevance.
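To see why concurrency barely matters here, a back-of-the-envelope calculation helps. The sketch below is illustrative only: it assumes a single CDN edge starting with a cold cache, no TTL expiry during the test, and every user fetching all 8 assets exactly once, so only the very first request per asset is a miss.

```python
# Idealized cache-hit ratio for Experiment 1.
# Assumptions (illustrative only): one CDN edge with a cold cache,
# no TTL expiry during the test, and every user fetching all 8 assets
# (1 page, 5 images, 1 CSS, 1 JS) exactly once.
ASSETS = 8

for users in (100, 500, 1000):
    requests = ASSETS * users
    misses = ASSETS                # only the first fetch of each asset misses
    hits = requests - misses
    print(f"{users:>4} users: cache-hit ratio {hits / requests:.2%}")
```

Under these assumptions the ratio comes out at 99.00%, 99.80%, and 99.90% respectively: effectively indistinguishable, whatever the concurrency.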
While such a test may seem pointless, it underscores the importance of ensuring that the test data is adequate for a comprehensive performance evaluation.
Experiment 2: Testing with Varied File Distribution
Next, let’s test with 100,000 files and 100 concurrent users, utilizing a CDN with an 8-hour cache TTL. In this scenario, we anticipate the cache-hit ratio climbing gradually during the cache-warming phase and then levelling off below 100%, because cached entries expire after 8 hours and must be fetched from the origin again.
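A toy simulation makes this warm-up-then-expiry pattern visible. The sketch below is a rough model, not a real load test: the request rate is an invented figure, requests are drawn uniformly at random, and the CDN cache is reduced to a dictionary of expiry times.

```python
import random

# Toy model of Experiment 2: 100,000 files behind a CDN with an
# 8-hour TTL. Requests are drawn uniformly at random; the request
# rate is an assumed figure for illustration.
FILES = 100_000
TTL = 8 * 3600                     # cache TTL in seconds
RATE = 10                          # assumed requests per second

random.seed(1)
cache = {}                         # file id -> expiry time of cached copy
hits = misses = 0

for n in range(12 * 3600 * RATE):  # simulate 12 hours of traffic
    now = n / RATE
    f = random.randrange(FILES)    # uniform: every file equally likely
    if cache.get(f, 0.0) > now:
        hits += 1
    else:                          # cold miss, or the cached copy expired
        misses += 1
        cache[f] = now + TTL

print(f"uniform access: {hits / (hits + misses):.1%} cache-hit ratio")
```

Printing the ratio in hourly buckets instead of once at the end shows the warm-up curve directly.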
However, real-world requests are not uniformly distributed: a small number of files receive the vast majority of requests, creating hotspots, while the long tail is requested only rarely. By incorporating this skewed distribution into our test, we achieve a far more realistic cache-hit ratio, emphasizing the importance of simulating actual user behavior.
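To model such hotspots, we can replace the uniform draw in the sketch above with a skewed, Zipf-like distribution, a common assumption for web traffic. The exponent of 1.1 below is an illustrative choice, not a measured value.

```python
import random
from itertools import accumulate

# Same toy cache as above, but with Zipf-like popularity: file 1 is
# the hottest, and popularity falls off quickly down the ranking.
FILES = 100_000
TTL = 8 * 3600                     # cache TTL in seconds
RATE = 10                          # assumed requests per second

# Pre-computed cumulative weights make each draw a fast binary search.
weights = [1 / rank ** 1.1 for rank in range(1, FILES + 1)]
cum_weights = list(accumulate(weights))
ids = range(FILES)

random.seed(1)
cache = {}
hits = misses = 0

for n in range(12 * 3600 * RATE):
    now = n / RATE
    f = random.choices(ids, cum_weights=cum_weights)[0]   # skewed draw
    if cache.get(f, 0.0) > now:
        hits += 1
    else:
        misses += 1
        cache[f] = now + TTL

print(f"zipf access: {hits / (hits + misses):.1%} cache-hit ratio")
```

Because the hot files are requested far more often than once per TTL window, they are effectively always cached, and this variant settles at a noticeably higher hit ratio than the uniform one.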
Striking a Balance for Effective Performance Testing
As we aim for realistic test results, we encounter complexities in test design, data preparation, and modelling, especially when factoring in additional variables like user behavior and search engine interactions. Balancing test realism with practicality becomes crucial to derive meaningful insights from performance tests.
While achieving a high level of realism in tests demands effort, it also enhances the predictive accuracy of test outcomes. By understanding the intricacies of test data and scenarios, we can optimize our performance test models for actionable results.
About the Author
Published by Jörg on February 9, 2024. Last updated on May 8, 2024.
Jörg, with over 15 years of experience in Adobe Experience Manager, brings a wealth of knowledge in system operation, stability, and performance. Stay tuned for the next post in this series, where we explore various scenarios to enrich your performance test models.