Welcome to the 4th post in our blog series on performance test modeling, where we explore different aspects of creating effective performance tests. If you haven’t checked out the previous posts, be sure to start with part 1 for an overview and links to all articles in this series.

In parts 2 and 3, we discussed key elements to consider when modeling your performance tests:

  • Modeling the expected load, typically represented as “concurrent users”.
  • Creating a realistic model of the system under test, including relevant content and data.

In this post, we’ll look at how to derive concrete test scenarios from that model, because no single test run can answer every question about end-user performance.

The Basic Performance Test Scenario

Let’s consider a simple model in which the traffic rate remains constant throughout the day, which is exactly what the classic performance test setup assumes: a flat line of requests at the expected average load.

While this model may seem straightforward, two things are easy to overlook: real traffic is never perfectly uniform, so the test should include some background noise, and caches need a warm-up phase before response times stabilize. Accounting for both is crucial for accurate results.
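
To make this concrete, here is a minimal sketch of such a setup using k6 (a tool chosen purely for illustration; any load tool with comparable features works, and the URLs, user counts, and rates below are assumptions to be replaced with values from your own model):

    import http from 'k6/http';
    import { sleep } from 'k6';

    export const options = {
      scenarios: {
        // The modeled users at the expected average load.
        main_traffic: {
          executor: 'constant-vus',
          vus: 100,          // assumed "concurrent users" from the load model
          duration: '2h',
          exec: 'mainJourney',
        },
        // Low-rate background noise, so the system never sees
        // perfectly clean, cache-friendly traffic.
        background_noise: {
          executor: 'constant-arrival-rate',
          rate: 5,           // assumed 5 stray requests per second
          timeUnit: '1s',
          duration: '2h',
          preAllocatedVUs: 10,
          exec: 'noise',
        },
      },
    };

    export function mainJourney() {
      http.get('https://test.example.com/products');   // hypothetical endpoint
      sleep(1 + Math.random() * 4);                    // randomized think time
    }

    export function noise() {
      http.get('https://test.example.com/random-page'); // hypothetical endpoint
    }

The separate noise scenario keeps cache behavior honest: with only the modeled journeys running, hit rates would be unrealistically high.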

Adjusting the test plan to reflect these considerations yields a more realistic scenario: a warm-up ramp to populate the caches, followed by a steady plateau at the target load, with background noise running the entire time.

Conducted with properly modeled user behavior and data requests, even this basic scenario yields valuable insights, and the same building blocks carry over to scenarios with fluctuating request rates.

For a comprehensive performance test, plan on a total run time of 2-3 hours, including a warm-up phase of at least 30 minutes so that caches and the system as a whole reach a steady state. Increasing the load gradually, rather than all at once, also makes it much easier to identify the load level at which bottlenecks appear.
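
Using the same hypothetical numbers as above, such a run could be sketched like this (again with k6; the stage durations, targets, and threshold are illustrative assumptions):

    import http from 'k6/http';
    import { sleep } from 'k6';

    export const options = {
      stages: [
        { duration: '30m', target: 50 },   // warm-up: fill caches at half load
        { duration: '30m', target: 100 },  // gradual ramp to the expected load
        { duration: '90m', target: 100 },  // steady state: the measurement window
        { duration: '10m', target: 0 },    // ramp down
      ],
      thresholds: {
        // Optional guardrail: flag the run if the 95th-percentile
        // response time exceeds 500 ms (an assumed target).
        http_req_duration: ['p(95)<500'],
      },
    };

    export default function () {
      http.get('https://test.example.com/products');   // hypothetical endpoint
      sleep(1 + Math.random() * 4);                    // randomized think time
    }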

Variations of the Basic Performance Test

While the initial model depicts an average day, you can explore variations of it to answer additional questions (one such variation is sketched after the list):

  • Testing at higher load levels (e.g., 120%) to assess system performance under stress.
  • Running extended tests (24-48 hours) to detect issues like memory leaks over time.
  • Conducting overload tests to observe system behavior beyond normal limits.
  • Simulating backend failures and recoveries to evaluate system resilience.
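
As an illustration, here is how the first variation, a stress test at 120% of the expected load, could look with the same hypothetical k6 setup; for a soak test, you would instead keep the target at the normal level and stretch the plateau to 24-48 hours:

    import http from 'k6/http';
    import { sleep } from 'k6';

    export const options = {
      stages: [
        { duration: '30m', target: 100 },  // warm-up to the normal load level
        { duration: '1h',  target: 100 },  // baseline plateau for later comparison
        { duration: '15m', target: 120 },  // ramp to 120% of the assumed 100-user baseline
        { duration: '1h',  target: 120 },  // hold and watch for degradation
        { duration: '10m', target: 0 },    // ramp down
      ],
    };

    export default function () {
      http.get('https://test.example.com/products');   // hypothetical endpoint
      sleep(1 + Math.random() * 4);                    // randomized think time
    }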

Once you have mastered the basic performance test, each of these variations is only a small modification of it, letting you validate system behavior under a much wider range of conditions.

Stay tuned for part 5, where we’ll discuss what to do when performance test results don’t match expectations, and how to prepare for unexpected behavior in the production environment.