On Fri, 18 Mar 2005 14:25:10 -0600, Chris Adams <cmadams@xxxxxxxxxx> wrote:
> Some regression tests are designed to be run in the build tree, so
> building a package is probably not the best way to do this (the OpenSSH
> tests are this way for example).

Actually, every one that I can think of using has been invoked via make on
the source tree... The question, then, should be: is that really the most
correct way to go about testing? I think the reason we see it done that way
today is that putting the testing into the makefile is expedient for the
developer, not that the code needs to be compiled with special testing
shims (such tests would be of reduced usefulness when we're talking about
complete system testing anyway, since the shims might conceal compiler
bugs, for example).

It would be useful for packages to use a standardized regression testing
framework (something to account for the tests, handle optional/mandatory
logic, perform random input testing if requested, etc.). I haven't gone
looking, but it seems like something that probably already exists.

> A better idea would be for the %build or %install stage (as appropriate)
> to optionally run such tests, so if they fail, the RPM doesn't build.

Well, the performance implications of running the regression tests for
every build may be considerable... I wouldn't want to discourage packages
from having a nice, exhaustive, and time-consuming set of tests. It would
be nice to decouple compiling from testing... For example, if a user has a
machine that is misbehaving, it might be useful to ask them to throw it
into a regression testing loop overnight, in the hope of finding a more
consistent way to trigger the bug.

Separate regression tests could find their way into the policies of
companies and even into purchasing contracts: "the server must be able to
complete the entire Fedora Core 5 regression suite, the PostgreSQL
regression test must complete within 3 hours", etc. This could only be a
good thing.
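
For the "optionally run tests so a failure aborts the build" idea above, a
rough spec-file sketch is something like the following. This is only a
sketch: %check is rpm's standard hook for running a test suite after
%build, but the test target name is whatever upstream happens to provide,
and %bcond_without assumes a reasonably recent rpm macro set (otherwise an
equivalent %{?_without_tests:...} test does the same job).

  # Run the test suite by default; "rpmbuild --without tests" skips it.
  %bcond_without tests

  %build
  %configure
  make %{?_smp_mflags}

  %check
  %if %{with tests}
  # A failing suite aborts the build here, so a broken binary RPM is
  # never produced.  "check" is assumed to be the upstream test target.
  make check
  %endif

A builder in a hurry could then do "rpmbuild --without tests foo.spec",
while the default build (and any automated build system) would still run
the full suite, which keeps the expensive testing optional without
encouraging packages to drop it.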