Understanding Performance Testing in AEM and AEM Cloud Service
In my last blog post, I discussed best practices for building and executing performance tests with AEM as a Cloud Service. However, one crucial aspect I deliberately left out is understanding what a performance test should look like, what constitutes a realistic test, and what insights the test results can offer about the behavior of your production environment.
These are important questions that are often overlooked or not even considered in the first place.
Today, I want to kick off a series of blog posts that will delve deeper into the world of performance testing in the context of AEM and AEM CS. This series aims to provide a broader understanding of performance testing, not limited to AEM but applicable to other web applications as well.
Unlike my previous posts, this series will focus on high-level concepts and steer away from specific AEM functionality or APIs. Why? Because I’ve come to realize that many performance tests are built on assumptions that are never written down. This lack of explicit documentation often leads to confusion and delays when interpreting test results.
I’ve found myself in situations where clients are dissatisfied with test results and demand explanations from Adobe, or where performance issues surface in production despite flawless test results. In both scenarios, inadequate documentation of the test process only exacerbates the problem.
As a result, this blog series can be seen as a preemptive measure to ensure that performance tests are comprehensive and well-documented from the start.
What is a Performance Test? And Why Do We Conduct Them?
A performance test serves as a tool to anticipate the future, specifically how a system will perform under predefined conditions. However, performance tests come with inherent uncertainties:
- They are predictive in nature; they do not deliver definitive results the way a controlled scientific experiment does.
- The test scenario is often vaguely defined, based on past experiences and industry best practices.
Despite these uncertainties, a well-executed performance test can offer more reliable insights than mere intuition. This matters especially for web applications, where performance under production load is a critical factor.
Designing a Meaningful Performance Test
For a performance test to be effective, it must be based on a clearly documented mental model of the system and the anticipated reality. This model should outline all assumptions, goals, and known factors influencing the test execution and results.
While this mental model is a simplification of reality, it forms the basis for implementing, executing, and analyzing the test. Without this documented model, the test results lack context and meaning.
It’s important to acknowledge that the mental model will never be complete or up-to-date due to the dynamic nature of systems and environments. However, by documenting known unknowns and acknowledging uncertainties, the test becomes more reliable and actionable.
A well-documented model makes it easier to reason about test results, identify gaps between the test scenario and reality, and make informed decisions moving forward.
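To make the idea of a documented model less abstract, here is a minimal sketch, assuming the model is captured as a small piece of code that lives next to the test scripts. All names, numbers, and assumptions in it are hypothetical illustrations only, not recommendations from Adobe or values taken from a real project.

```java
import java.time.Duration;
import java.util.List;

// A minimal, hypothetical sketch of a documented performance test model.
// The point is not the exact shape, but that goals, assumptions, and known
// unknowns are written down and versioned together with the test itself.
public record PerformanceTestModel(
        String goal,                      // the question the test should answer
        List<String> assumptions,         // simplifications of reality we are aware of
        List<String> knownUnknowns,       // factors we cannot model (yet)
        int expectedPeakRequestsPerSecond,
        double expectedCdnCacheHitRatio,  // share of traffic that never reaches AEM
        Duration testDuration,
        Duration acceptableP95ResponseTime) {

    public static void main(String[] args) {
        // Example instance with made-up numbers; interpret any test result
        // against exactly this documented context.
        PerformanceTestModel model = new PerformanceTestModel(
                "Validate that the site survives the product-launch traffic peak",
                List.of("Traffic mix is 90% anonymous page views, 10% search requests",
                        "Content volume matches current production content"),
                List.of("Exact share of bot traffic",
                        "Timing of marketing campaigns"),
                400,
                0.90,
                Duration.ofMinutes(60),
                Duration.ofMillis(500));

        System.out.println("Test goal: " + model.goal());
        System.out.println("Known unknowns: " + model.knownUnknowns());
    }
}
```

Whether you capture this model in code, a wiki page, or a spreadsheet matters far less than the fact that it is written down, kept alongside the test, and consulted when results are analyzed.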
Stay tuned for part 2 of this series where we will delve into specific aspects that should be covered by your performance test model.
Published by Jörg
Jörg has over 15 years of experience working with AEM, first as an architect in Adobe’s consulting organization, and currently as a Site Reliability Engineer in Adobe’s AEM as a Cloud Service business. He focuses on the operation, stability, and performance of AEM, sharing his knowledge with the wider community.
Published: January 26, 2024 | Updated: May 8, 2024