Re: [PATCH v3 3/3] config: add '--show-origin' option to print the origin of a config value

On Sun, Feb 14, 2016 at 01:48:59PM +0100, Lars Schneider wrote:

> > I see you split this up more, but there's still quite a bit going on in
> > this one block. IMHO, it would be more customary in our tests to put the
> > setup into one test_expect_success block, then each of these
> > expect-run-cmp blocks into their own test_expect_success.
> > 
> > It does mean that the setup mutates the global test state for further
> > tests (and you should stop using test_config_*, which clean up at the
> > end of the block), but I think that's the right thing here. The point of
> > test_config is "flip on this switch just for a moment, so we can test
> > its effect without hurting further tests". But these are config tests in
> > the first place, and it is OK for them to show a progression of
> > mutations of the config (you'll note that, like the other tests in this
> > script, you are clearing out .git/config at the start).
> > 
> TBH I am always a little annoyed when Git tests depend on each other. It makes
> it harder to disable all the uninteresting tests and focus only on the one
> I am working on. However, I agree with your point that the test block does too
> many things. Would it be OK if I write a shell function that performs the test
> setup? Then I would call this function at the beginning of every individual
> test. Or do you prefer the global-state strategy?

In general, my opinion is that skipping arbitrary leading tests is a
losing strategy. It's just too easy to introduce hidden dependencies,
and not worth the programmer time to make sure each test runs in
isolation. But others on the list may disagree.

That being said, I think what I am proposing is a much milder form of
that. With what I am proposing, you can skip everything _except_ tests
which match /set.?up/ in their description. We do not perfectly adhere
to that in our tests, but I suspect it works a majority of the time.

If it is taking too long to get to a particular test in a test script,
maybe that is a sign we need to break up the script. There are also a
few tricks you can use to still _run_ the earlier blocks, but not have
them interfere with debugging a particular test:

  1. Use --verbose-only=123 to get verbose output only from a single
     test.

  2. Use "-i" to stop running tests at the first failure. Usually it is
     worth fixing that one first, then seeing whether other tests still
     fail or were merely dependent on it.

  3. If you are using --valgrind, the tests run very slowly (t1300
     normally takes 400ms on my machine, so I don't mind waiting that
     long to reach a new test at the end; with valgrind it is more like
     90 seconds). You can use --valgrind-only=123 to run valgrind on
     only the block you are debugging, and run the rest quickly.
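As a sketch, the three options above might be invoked like this from
the t/ directory of a git checkout (the script name and the test
number 123 are placeholders; substitute the script and test you are
actually debugging):

```shell
# Placeholder script name; use the real t1300-* script in your tree.

# Verbose trace output for test 123 only; other tests stay quiet.
./t1300-config.sh --verbose-only=123

# Stop at the first failing test instead of running the whole script.
./t1300-config.sh -i

# Run every block, but apply valgrind only to test 123, so the
# surrounding setup tests still run at normal speed.
./t1300-config.sh --valgrind-only=123
```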

We do use shell functions in some places to do repeated setup. In
general, I prefer setting up the global state. It's more efficient
(which does add up when running the whole suite), and I find it easier
to debug failing tests (it's just one less thing the failing block is
doing that you have to look at; and you can generally "cd" into the
leftover trash directory to investigate the global state).
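A minimal standalone sketch of the global-state style, with a stand-in
test_expect_success (not git's real test-lib.sh, which does much more)
and a plain file standing in for .git/config:

```shell
#!/bin/sh
# Stand-in for test-lib.sh's test_expect_success, just enough to show
# the pattern: run a snippet, report ok/not ok with its description.
test_expect_success () {
	desc=$1
	script=$2
	if ( eval "$script" )
	then
		echo "ok - $desc"
	else
		echo "not ok - $desc"
	fi
}

# Global-state style: one setup block mutates state that later
# tests build on ("config" here stands in for .git/config).
test_expect_success 'setup config' '
	printf "%s\n" "[user]" "name = Alice" >config
'

# A later test relies on the state left behind by the setup block.
test_expect_success 'later test sees the setup state' '
	grep -q "name = Alice" config
'
```

After a failure you can inspect the leftover "config" file directly,
which is the debugging convenience described above: the failing block
itself does less, and the state it depended on is still on disk.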

-Peff