@@ -251,26 +251,6 @@ schedule an A/B-Test in buildkite, the `REVISION_A` and `REVISION_B` environment
variables need to be set in the "Environment Variables" field under "Options" in
buildkite's "New Build" modal.

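For example, the field takes one variable per line, so it might be filled in
like this (the shas below are placeholders for the two commits to compare):

```sh
REVISION_A=<commit sha for the A side>
REVISION_B=<commit sha for the B side>
```
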
- ### A/B visualization
-
- To create visualization of A/B runs use `tools/ab_plot.py` script. It supports
- creating `pdf` and `table` outputs with multiple directories as inputs. Example
- usage:
-
- ```sh
- ./tools/ab_plot.py a_path b_path --output_type pdf
- ```
-
- Alternatively using `devtool` running the script in the dev container with
- pre-installed dependencies.
-
- ```sh
- ./tools/devtool sh ./tools/ab_plot.py a_path b_path --output_type pdf
- ```
-
- > [!NOTE]
- > Generating `pdf` output may take some time for tests with a lot of
- > permutations.
-
### Beyond commit comparisons

While our automated A/B-Testing suite only supports A/B-Tests across commit
@@ -279,26 +259,37 @@ arbitrary environment (such as comparing how the same Firecracker binary
behaves on different hosts).

For this, run the desired tests in your environments using `devtool` as you
- would for a non-A/B test. The only difference to a normal test run is you should
- set two environment variables: `AWS_EMF_ENVIRONMENT=local` and
- `AWS_EMF_NAMESPACE=local`:
+ would for a non-A/B test. This will produce `test_results` directories which
+ will contain `metrics.json` files for each test run.
+
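+ For example, a single run in one environment could look like this (the test
+ selection below is only illustrative):
+
+ ```sh
+ tools/devtool -y test -- integration_tests/performance/test_boottime.py::test_boottime
+ ```
+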
+ The `tools/ab_test.py` script can find and use these `metrics.json` files in the
+ provided directories to compare runs:

```sh
- AWS_EMF_ENVIRONMENT=local AWS_EMF_NAMESPACE=local tools/devtool -y test -- integration_tests/performance/test_boottime.py::test_boottime
+ tools/ab_test.py analyze <path to A test_results> <path to B test_results>
```

- This instructs `aws_embedded_metrics` to dump all data series that our A/B-Test
- orchestration would analyze to `stdout`, and pytest will capture this output
- into a file stored at `./test_results/test-report.json`.
+ This will then print the same analysis described in the previous sections.

- The `tools/ab_test.py` script can consume these test reports, so next collect
- your two test report files to your local machine and run
+ #### Visualization
+
+ To create visualizations of A/B runs, use the `tools/ab_plot.py` script. It
+ supports creating `pdf` and `table` outputs from the same `metrics.json` files
+ used by `tools/ab_test.py`. Example usage:

```sh
- tools/ab_test.py analyze <first test-report.json> <second test-report.json>
+ ./tools/ab_plot.py <path to A test_results> <path to B test_results> --output_type pdf
```

- This will then print the same analysis described in the previous sections.
+ Alternatively, use `devtool` to run the script in the dev container with
+ pre-installed dependencies:
+
+ ```sh
+ ./tools/devtool sh ./tools/ab_plot.py <path to A test_results> <path to B test_results> --output_type pdf
+ ```
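+
+ The `table` output prints a textual summary instead; a sketch, assuming the
+ type is selected via the same flag as in the `pdf` examples above:
+
+ ```sh
+ ./tools/ab_plot.py <path to A test_results> <path to B test_results> --output_type table
+ ```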
+
+ > [!NOTE]
+ > Generating `pdf` output may take some time for tests with a lot of
+ > permutations.

#### Troubleshooting
