Jeff King <peff@xxxxxxxx> writes:

> One further devil's advocate:
>
> If people really _do_ care about coverage, arguably the AFL tests are a
> pollution of that concept. Because they are running the code, but doing
> a very perfunctory job of testing it. IOW, our coverage of "code that
> doesn't segfault or trigger ASAN" is improved, but our coverage of "code
> that has been tested to be correct" is not (and since the tests are
> lumped together, it's hard to get anything but one number).
>
> So I dunno. I remain on the fence about the patch.

Yeah, I have been disturbed by your earlier remark "binary test
cases that nobody, not even the author, understands", and the above
summarizes it more clearly.

Continuously running fuzzer tests on the codebase would have value,
but how exactly are these fuzzballs generated?  Don't they depend on
the code being tested?  IOW, how effective is a set of fuzzballs,
generated to force the current code to take more branches, when it
comes to testing new code that updates and restructures that
codepath?

Unless a new set of fuzzballs is generated to match the updated
codeflow, wouldn't the test coverage with these fuzzballs erode over
time, making them less and less useful baggage we carry around, with
nobody noticing that they are no longer effective at improving test
coverage?
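
If we did want to keep such a corpus around, one way to make that
erosion visible would be to periodically re-measure what the stored
inputs still cover against the current build.  A minimal sketch,
assuming an AFL-instrumented build of a fuzz target and a corpus
directory (the target name and paths below are illustrative, not
anything we actually ship):

#!/bin/sh
# Re-measure how many unique edges the saved fuzzball corpus still
# exercises in the current (AFL-instrumented) build.
FUZZ_BIN=./fuzz-pack-headers   # hypothetical AFL-instrumented target
CORPUS=t/fuzz-corpus           # hypothetical directory of saved inputs

tmp=$(mktemp -d) || exit 1
for input in "$CORPUS"/*
do
	# afl-showmap records one "edge_id:hit_count" line per edge
	# that a single input hits.
	afl-showmap -q -o "$tmp/$(basename "$input").map" \
		-- "$FUZZ_BIN" <"$input"
done

# Count distinct edges hit across the whole corpus; if this number
# keeps dropping as the code under test changes, the corpus is
# eroding in exactly the way described above.
cut -d: -f1 "$tmp"/*.map | sort -u | wc -l
rm -rf "$tmp"

Tracking that single number across releases would at least tell us
when the fuzzballs have stopped pulling their weight, though it does
nothing to address Peff's point that edge coverage alone says little
about correctness.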