You rock. A few comments:

In the long run, I'd like to check that the expected output matches the actual output, rather than just checking for the presence of output; we could do this by storing expected warning/error output in a file matching the test but with a .err extension. That won't stop me from committing this test suite implementation in the meantime, though.

I don't know for sure, since many of the test suite files don't include documentation, but I believe that some of them (like the preprocessor tests) only run as far as sparse -E, and don't actually compile as C. Thus, they shouldn't get listed under BAD_EXP_OK. For these, I'd eventually like to check both the preprocessed output and the warning/error output against the expected values.

Several tests appear miscategorized in this test suite. For example, struct-ns2.c includes a comment saying "This is not in scope and should barf loudly.", and includes code which should indeed cause sparse to barf loudly, but sparse does not output anything when run on that file, even with -Wall. Conversely, on initializer-entry-defined-twice.c, sparse actually *should* report a problem (two problems, in fact), and it does; it just shouldn't report a problem on the third. Also, builtin_safe1.c falls in the category of "doesn't currently pass", since it warns in two places it shouldn't, in addition to the various places it should; catching that, however, would require checking the actual warning/error output. Figuring out which of these various tests should pass and which should fail will require a fair bit of work.

Ideally, I'd love to see the test suite find new tests automatically, rather than needing to list them explicitly. With a warning/error output file for each test, the only difference between an OK_EXP_OK and a BAD_EXP_BAD becomes "does the .err file have any content", so that just leaves the known failures, which we can list explicitly, since we don't need to make it easy to add more.
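To make the .err-comparison idea concrete, here's a rough sketch of what a per-test check could look like; check_one, the test-err scratch file name, and the "missing .err means clean output expected" convention are all my invention, not anything from the patch:

```shell
#! /bin/sh
# Hypothetical helper: compare a test's actual stderr (already captured in
# test-err) against its stored expected output in <test>.err, instead of
# merely testing whether any output was produced.
check_one() {
    t=$1
    expected=${t%.c}.err
    # Treat a missing .err file as "no warnings or errors expected".
    [ -f "$expected" ] || : > "$expected"
    if cmp -s "$expected" test-err; then
        echo "PASS: $t"
    else
        echo "FAIL: $t"
        return 1
    fi
}
```

The nice property is that a test's expected category falls out of its .err file, so nothing needs listing by hand except genuine known failures.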
Pavel Roskin wrote:
> --- /dev/null
> +++ b/validation/testsuite
> @@ -0,0 +1,118 @@
> +#! /bin/sh
> +
> +set -e

At first, this seemed wrong, since sparse should exit with a non-zero exit code if it emits any errors; however, I then realized that sparse doesn't actually *do* this, which seems like a bug.

> +: ${SPARSE=../sparse}
> +: ${SPARSE_FLAGS=}

I don't think the test suite should allow setting sparse flags, partly because changing the flags may well cause a test to pass or fail when it shouldn't, and partly because at some point different parts of the test suite might need different flags.

> +for t in $BAD_EXP_OK; do
> +    $SPARSE $SPARSE_FLAGS $t 2>test-err
> +    if test -s test-err; then
> +        echo "UNEXPECTED PASS: $t"
> +        unexp=1
> +    else
> +        echo "(known) FAIL: $t"
> +    fi
> +done
> +
> +for t in $OK_EXP_BAD; do
> +    $SPARSE $SPARSE_FLAGS $t 2>test-err
> +    if test -s test-err; then
> +        echo "(known) FAIL: $t"
> +    else
> +        echo "UNEXPECTED PASS: $t"
> +        unexp=1
> +    fi
> +done

At first I balked a bit at the idea of having the test suite succeed while some tests don't actually pass, but the more I thought about it, the more it seemed like the right approach. It means that we can ensure that the test suite passes in every single Git revision of sparse. Once all the tests run as expected, we can dump this part of the test suite code.

- Josh Triplett
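As a sketch of the "only the known failures need an explicit list" idea: with per-test .err files, everything else is discoverable, so the hand-maintained state shrinks to something like the following. KNOWN_FAIL and is_known_fail are illustrative names of mine, not part of the patch:

```shell
#! /bin/sh
# Hypothetical: the only hand-maintained list is the set of tests known
# not to behave as their .err files say they should. Everything else is
# categorized automatically from whether its .err file has content.
KNOWN_FAIL="builtin_safe1.c"

# Return success if $1 is in the space-separated KNOWN_FAIL list.
is_known_fail() {
    case " $KNOWN_FAIL " in
        *" $1 "*) return 0 ;;
        *) return 1 ;;
    esac
}
```

A failure of a test in this list would print as "(known) FAIL" without failing the suite; a failure of anything else would.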