On Wed, Jan 21, 2015 at 03:23:40PM -0800, Stefan Beller wrote:

> Signed-off-by: Stefan Beller <sbeller@xxxxxxxxxx>
> ---
>  t/t1400-update-ref.sh | 28 ++++++++++++++++++++++++++++
>  1 file changed, 28 insertions(+)
>
> diff --git a/t/t1400-update-ref.sh b/t/t1400-update-ref.sh
> index 6805b9e..ea98b9b 100755
> --- a/t/t1400-update-ref.sh
> +++ b/t/t1400-update-ref.sh
> @@ -1065,4 +1065,32 @@ test_expect_success 'stdin -z delete refs works with packed and loose refs' '
> 	test_must_fail git rev-parse --verify -q $c
> '
>
> +run_with_limited_open_files () {
> +	(ulimit -n 32 && "$@")
> +}
> +
> +test_lazy_prereq ULIMIT 'run_with_limited_open_files true'

We already have a ULIMIT prereq in t7004 that does something similar but
different. The two do not conflict as long as they are in separate
scripts, but they would if one got moved into test-lib.sh. Should we
maybe give these more descriptive names? It is not just about "ulimit",
but about the individual limit option. I can imagine a platform where
"ulimit -s" works but "ulimit -n" does not (or the other way around).

I almost also suggested that the two "ulimit -s" instances share the
same function and lazy prereq, but I think that is probably not a good
idea. One cares about limiting the stack, and the other cares about
limiting the cmdline size. The latter _happens_ to be done using
"ulimit -s". That works on Linux, but I have no clue about elsewhere. I
could easily imagine a platform where there is some other way, and we
add a run-time switch.

> +test_expect_failure ULIMIT 'large transaction creating branches does not burst open file limit' '
> +(
> +	for i in $(seq 33)

Use test_seq here, for portability.

> +test_expect_failure ULIMIT 'large transaction deleting branches does not burst open file limit' '
> +(
> +	for i in $(seq 33)

Ditto here.

The rest of the tests looked good to me.
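To make the renaming suggestion concrete, here is a rough sketch of what I mean; the ULIMIT_FILE_DESCRIPTORS name is just a strawman, not a proposal for the final spelling:

```shell
# Strawman (sketch only): tie the prereq name to the specific limit
# being probed rather than to "ulimit" in general. The subshell keeps
# the lowered soft open-file limit from leaking into the parent shell.
run_with_limited_open_files () {
	(ulimit -n 32 && "$@")
}

# In t1400 the prereq declaration would then become something like:
#
#   test_lazy_prereq ULIMIT_FILE_DESCRIPTORS \
#	'run_with_limited_open_files true'
#
# while t7004 could use a name like ULIMIT_STACK_SIZE for its
# "ulimit -s" probe, and the two could coexist in test-lib.sh.
if run_with_limited_open_files true
then
	echo "ulimit -n works here"
fi
```

That way a platform where only one of the two limit options works can still satisfy the prereq it actually supports.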
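For anyone wondering why bare "seq" is a problem: it is not required by POSIX and is missing on some platforms, which is why the test suite carries its own test_seq helper. A self-contained approximation of what that helper does (my_seq is just an illustrative name, not the real implementation):

```shell
# Portable stand-in illustrating what git's test_seq provides: count
# from 1 to N using only POSIX shell constructs, since "seq" is not
# universally available (some systems ship "jot" instead).
my_seq () {
	i=1
	while test "$i" -le "$1"
	do
		echo "$i"
		i=$((i + 1))
	done
}

my_seq 33
```

Inside the test that would replace the non-portable `for i in $(seq 33)` loop header.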
-Peff