On Fri, 5 Nov 2021 at 19:11, Junio C Hamano <gitster@xxxxxxxxx> wrote:
>
> Adam Dinwoodie <adam@xxxxxxxxxxxxx> writes:
>
> > This is probably a much broader conversation. I remember when I first
> > started packaging Git for Cygwin, I produced a release that didn't
> > have support for HTTPS URLs due to a missing dependency in my build
> > environment. The build and test suite all passed -- it assumed I just
> > wanted to build a release that didn't have HTTPS support -- so some
> > relatively critical function was silently skipped. I don't know how to
> > avoid that sort of issue other than relying on (a) user bug (or at
> > least missing function) reports and (b) folk building Git for
> > themselves/others periodically going through the output of the
> > configure scripts and the skipped subtests to make sure only expected
> > things get missed; neither of those options seems great to me.
>
> I agree with you that there needs to be a good way to enumerate what
> the unsatisfied prerequisites for a particular build are. That would
> have helped in your HTTPS situation.
>
> But that is a separate issue from how we should determine whether a
> lazy prerequisite for any feature is satisfied.
>
> "We have this feature that our code utilizes. If it is not working
> correctly, then we can expect our code that depends on it would not
> work, and it is no use testing" is what the test prerequisite system
> tries to achieve. That is quite different from "the frotz feature
> could work here as we see a binary /usr/bin/frotz installed, so
> let's go test our code that depends on it---we'll find out if the
> installed frotz is not what we expect, or way too old to help our
> code, as the test will break and let us notice."

I can see how they're separate problems, but they seem related to me.
If OpenSSH were not installed on my system, Git would be compiled
without this function and the tests would be skipped.
If OpenSSH is installed but the prerequisite check fails, Git will be
compiled with the function, but the tests will be skipped. In the
first case, functionality some users might depend on will be missing;
in the second, the function will be nominally present, but we won't be
sure it's actually working as expected. Both issues would be avoided
if the tests were always run, because both sorts of silent failure
would suddenly become noisy. I'm not actually advocating that --
running all tests all the time would clearly cause far more problems
than it would solve! -- but that's why I see these as two sides of the
same coin, and as problems that might have a single shared solution.
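For anyone following along, the distinction Junio draws -- "the probe
command actually works" versus "a binary merely exists on disk" -- can
be sketched outside the real harness. Git's actual mechanism is
test_lazy_prereq in t/test-lib-functions.sh; the helper names below
(satisfy_prereq, run_test_if) are hypothetical stand-ins for
illustration, not Git's API:

```shell
#!/bin/sh
# Hypothetical sketch of a lazy prerequisite: the prerequisite is
# satisfied only if a probe command actually succeeds, not merely if
# a binary happens to be installed. Helper names are made up; Git's
# real version is test_lazy_prereq in t/test-lib-functions.sh.

satisfy_prereq () {
	# $1 = prerequisite name; remaining args = probe command.
	name=$1
	shift
	if "$@" >/dev/null 2>&1
	then
		eval "PREREQ_$name=yes"
	else
		eval "PREREQ_$name=no"
	fi
}

run_test_if () {
	# Run the given test command only if the named prerequisite
	# was satisfied; otherwise report a (silent-by-default) skip.
	name=$1
	shift
	eval "state=\$PREREQ_$name"
	if [ "$state" = yes ]
	then
		"$@"
	else
		echo "skipping: prerequisite $name not satisfied"
	fi
}

satisfy_prereq WORKS true             # probe succeeds
satisfy_prereq BROKEN no-such-command # probe fails: prereq unsatisfied

run_test_if WORKS echo "running WORKS-gated test"
run_test_if BROKEN echo "running BROKEN-gated test"
```

The point of the sketch is the failure mode under discussion: if the
probe for BROKEN fails for an *unexpected* reason (a misconfigured
build rather than a genuinely absent feature), the gated test is
skipped just as silently as in the genuinely-absent case.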