On Mon, Aug 26, 2013 at 10:35:01AM -0700, Jonathan Nieder wrote:

> > I don't see that splitting it up more hurts this. If we wanted more
> > automatic rearranging or skipping of tests, we would need tests to
> > declare dependencies on their setup. And we would need to be able to
> > declare dependencies on multiple tests, since having a single setup
> > test does not work in all cases (e.g., sometimes you are testing each
> > step, and the final step relies on earlier steps).
>
> Actually dependencies can already be inferred for most test scripts
> using the following rule:
>
>   Each test depends on all tests with the word "setup" or the
>   phrase "set up" in their title that precede it.

I'd be very surprised if this works in practice on most of our current
test scripts. There are often subtle dependencies on state left over
from previous tests. Running the script below up through t3800 (at
which point I lost patience) reveals 37 test scripts that are broken.
That is only about 17%, but we're clearly not quite there yet.

But sure, I agree that is something to strive for. And I think my point
stands: splitting up the setup doesn't hurt as long as you note all of
the relevant tests as dependencies. By your rule above, the advice
would be "each of these tests should say 'setup' in its description".
That makes sense to me, and I don't mind doing it here.

> Of course splitting up the setup into 3 steps neither helps nor hurts
> that. What I was complaining about is splitting the filter-branch
> from the verification that filter-branch had the right result.

I totally missed your original point, then. I don't mind combining the
last two into a single test. That makes sense to me (I only split them
at all because I added the second one much later during development).

-Peff

-- >8 --
#!/usr/bin/perl
#
# Run as "perl foo.pl t0000-basic.sh" (or whatever script you want to
# check).

my $script = shift;
my ($script_num) = $script =~ /^(t\d+)/;

# run once to get the list of tests
my @tests = run_tests($script);

# mark some tests as "setup" tests that will always be run
foreach my $test (@tests) {
        if ($test->{desc} =~ /set.?up/i) {
                print STDERR "marking $test->{num} - $test->{desc} as setup\n";
                $test->{setup} = 1;
        }
}

# now try each test in isolation, but including setup tests
foreach my $test (@tests) {
        $ENV{GIT_SKIP_TESTS} = skip_all_except($script_num, $test, @tests);
        run_tests($script)
                or die "failed $test->{num} ($test->{desc})\n";
}

sub run_tests {
        my @r;
        open(my $fh, '-|', 'sh', $script)
                or die "unable to run $script: $!\n";
        while (<$fh>) {
                /^ok (\d+) - (.*)/
                        and push @r, { num => $1, desc => $2 };
        }
        # close() reaps the child, so $? actually reflects its exit status
        close($fh);
        $? and return ();
        return @r;
}

sub skip_all_except {
        my $prefix = shift;
        my $want = shift;
        return join(' ',
                    map { "$prefix.$_->{num}" }
                    grep { !$_->{setup} && $_->{num} != $want->{num} }
                    @_);
}
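
In case it helps with reading the script, here is the shape of what
skip_all_except() generates (the test numbers are made up, purely for
illustration): GIT_SKIP_TESTS is a space-separated list of
"<script>.<test number>" entries, so isolating test 3 of a script whose
only setup test is test 1 amounts to running something like

  GIT_SKIP_TESTS='t0000.2 t0000.4 t0000.5' sh t0000-basic.sh

i.e. skipping everything except the setup tests and the one test being
checked.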