On Tue, Dec 01, 2020 at 03:55:00PM -0500, Derrick Stolee wrote:

> > Since Stolee is on the cc and has already seen me complaining about his
> > test, I guess I should expand a bit. ;)
>
> Ha. I apologize for causing pain here. My thought was that GIT_TEST_LONG=1
> was only used by someone really willing to wait, or someone specifically
> trying to investigate a problem that only triggers on very large cases.
>
> In that sense, it's not so much intended as a frequently-run regression
> test, but a "run this if you are messing with this area" kind of thing.
> Perhaps there is a different pattern to use here?

No, I think your interpretation is pretty reasonable. I definitely do not
run with GIT_TEST_LONG normally, but was only poking at it because of the
other discussion.

> > Though speaking of which, another easy win might be setting
> > core.compression to "0". We know the random data won't compress anyway,
> > so there's no point in spending cycles on zlib.
>
> The intention is mostly to expand the data beyond two gigabytes, so
> dropping compression to get there seems like a good idea. If we are
> not compressing at all, then perhaps we can reliably cut ourselves
> closer to the 2GB limit instead of overshooting as a precaution.

Probably, though I think at best we could save 20% of the test cost. I
didn't play much with it.

> Cutting 70% out seems like a great idea. I don't think it was super
> intentional to slow down those tests.

Here it is as a patch (the numbers are slightly different this time
because I used my usual ramdisk, but the overall improvement is roughly
the same). I didn't look into whether it should be cleaning up after
itself (test 16 takes longer, too, because of the extra on-disk bytes).

I also noticed while running with "-v" that it complains of corrupted
refs. I assume this is leftover cruft from earlier tests, and I didn't
dig into it. But it may be something worth cleaning up for somebody more
familiar with these tests.
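If anybody wants to double-check the premise that random bytes will not
deflate, here is a rough sketch outside of Git (not part of the patch;
it assumes GNU gzip and /dev/urandom are available, and "rand.bin" is
just a scratch filename):

  # write 16MB of random bytes, then try to compress them
  dd if=/dev/urandom of=rand.bin bs=1M count=16 2>/dev/null
  gzip -k rand.bin
  # the .gz comes out marginally *larger* (stored deflate blocks plus
  # gzip framing), so all the compression cycles were pure overhead
  ls -l rand.bin rand.bin.gz

Outside of the test suite, the equivalent knob would be a plain
"git config core.compression 0", or a one-shot "git -c
core.compression=0 <cmd>".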
-- >8 --
Subject: [PATCH] t7900: speed up expensive test

A test marked with EXPENSIVE creates two 2.5GB files and adds them to
the repository. This takes 194s to run on my machine, versus 2s when
the EXPENSIVE prereq isn't set. We can trim this down a bit by doing
two things:

  - use "git commit --quiet" to avoid spending time generating a diff
    summary (this actually only helps for the second commit, but I've
    added it here to both for consistency). This shaves off 8s.

  - set core.compression to 0. We know these files are full of random
    bytes, and so won't compress (that's the point of the test!).
    Spending cycles on zlib is pointless. This shaves off 122s.

After this, my total time to run the script is 64s. That won't help
normal runs without GIT_TEST_LONG set, of course, but it's easy enough
to do.

Signed-off-by: Jeff King <peff@xxxxxxxx>
---
 t/t7900-maintenance.sh | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/t/t7900-maintenance.sh b/t/t7900-maintenance.sh
index d9e68bb2bf..d9a02df686 100755
--- a/t/t7900-maintenance.sh
+++ b/t/t7900-maintenance.sh
@@ -239,13 +239,15 @@ test_expect_success 'incremental-repack task' '
 '
 
 test_expect_success EXPENSIVE 'incremental-repack 2g limit' '
+	test_config core.compression 0 &&
+
 	for i in $(test_seq 1 5)
 	do
 		test-tool genrandom foo$i $((512 * 1024 * 1024 + 1)) >>big ||
 		return 1
 	done &&
 	git add big &&
-	git commit -m "Add big file (1)" &&
+	git commit -qm "Add big file (1)" &&
 
 	# ensure any possible loose objects are in a pack-file
 	git maintenance run --task=loose-objects &&
@@ -257,7 +259,7 @@ test_expect_success EXPENSIVE 'incremental-repack 2g limit' '
 		return 1
 	done &&
 	git add big &&
-	git commit -m "Add big file (2)" &&
+	git commit -qm "Add big file (2)" &&
 
 	# ensure any possible loose objects are in a pack-file
 	git maintenance run --task=loose-objects &&
-- 
2.29.2.894.g2dadb8c6b8