On 2019.07.26 15:03, Josh Steadmon wrote:
[snip]
> [ajv-cli] can validate the full 1.7M line trace output in just over a
> minute. Moreover, it has helpful output when validation fails. So I
> would be happy to re-implement this using ajv-cli.

Unfortunately, ajv on Travis is much slower than on my work machine. It
still takes over 10 minutes to complete, and since it doesn't provide
any progress indicator while it's running, Travis kills the job.

How would people feel about validating a sample of the "make test"
output? In the short term we could use command-line tools to sample the
trace file; in the long term, we could add a sampling config option to
trace2 (something I've been considering for other reasons anyway).
Ideally the sample would be deterministic for any given commit, so that
we don't end up with flaky tests if trace2 changes without a
corresponding schema update.

Since there have been some suggestions to build a standalone test and
verify its trace output, let me reiterate why I feel it's better to use
"make test" instead: I do not believe I can write a standalone test that
exercises a wide enough selection of code paths to get sufficient
coverage of all potential trace2 output, and making such a test also
anticipate future development is practically impossible. Using "make
test" means I can rely on the whole community to identify important code
paths, both now and in the future.

As always, I am open to other approaches to keep the schema up to date.
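To sketch the short-term idea: a line-number-based sample is trivially
deterministic for a fixed trace file, so the same commit always
validates the same subset. This is only an illustration; the file
names, the 1% sampling rate, and the schema path are all hypothetical:

```shell
# Stand-in for real trace2 output: one JSON event per line.
seq 1 1000 | sed 's/.*/{"event":"test","line":&}/' > trace.out

# Keep every 100th line. NR-based selection is deterministic, so the
# same input always yields the same sample.
awk 'NR % 100 == 1' trace.out > trace.sample

# Then validate only the sample, e.g. with ajv-cli (invocation
# illustrative, schema path hypothetical):
#   ajv validate -s trace2_schema.json -d trace.sample
wc -l < trace.sample
```

A hash-based selection (e.g. keying off each event's session id) would
also be deterministic while avoiding any bias from event ordering, at
the cost of a slightly more involved filter.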