On Thu, Feb 13, 2020 at 1:40 PM Alexandr Miloslavskiy via GitGitGadget
<gitgitgadget@xxxxxxxxx> wrote:

> 3) Make `poll()` always reply "writable" for write end of the pipe
>    Afterall it seems that cygwin (accidentally?) does that for years.
>    Also, it should be noted that `pump_io_round()` writes 8MB blocks,
>    completely ignoring the fact that pipe's buffer size is only 8KB,
>    which means that pipe gets clogged many times during that single
>    write. This may invite a deadlock, if child's STDERR/STDOUT gets
>    clogged while it's trying to deal with 8MB of STDIN. Such deadlocks
>    could be defeated with writing less then pipe's buffer size per

s/then/than/

>    round, and always reading everything from STDOUT/STDERR before
>    starting next round. Therefore, making `poll()` always reply
>    "writable" shouldn't cause any new issues or block any future
>    solutions.
>
> 4) Increase the size of the pipe's buffer
>    The difference between `BytesInQueue` and `QuotaUsed` is the size
>    of pending reads. Therefore, if buffer is bigger then size of reads,

s/then/than/

>    `poll()` won't hang so easily. However, I found that for example
>    `strbuf_read()` will get more and more hungry as it reads large
>    inputs, eventually surpassing any reasonable pipe buffer size.

> diff --git a/t/t3903-stash.sh b/t/t3903-stash.sh
> +test_expect_success 'stash handles large files' '
> +	printf "%1023s\n%.0s" "x" {1..16384} >large_file.txt &&
> +	git stash push --include-untracked -- large_file.txt
> +'

Use of {1..16384} is not portable across shells. You should be able to
achieve something similar by assigning a really large value to a shell
variable and then echoing that value to "large_file.txt". Something
like:

    x=0123456789
    x=$x$x$x$x$x$x$x$x$x$x
    x=$x$x$x$x$x$x$x$x$x$x
    ...and so on...
    echo $x >large_file.txt &&

or any other similar construct.
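
For illustration, here is one way the suggested construct could look in
full. The sizes in the comments are illustrative (a real test would keep
doubling until the file comfortably exceeds the pipe buffer sizes
discussed above), and `printf` is used instead of `echo` only because
its behavior is more predictable across shells:

```shell
# Sketch: grow a shell variable by repeated concatenation instead of
# relying on the non-portable {1..16384} brace expansion.
x=0123456789             # 10 bytes
x=$x$x$x$x$x$x$x$x$x$x   # 100 bytes
x=$x$x$x$x$x$x$x$x$x$x   # 1,000 bytes
x=$x$x$x$x$x$x$x$x$x$x   # 10,000 bytes
# ...repeat the line above until the value is as large as needed...
printf "%s\n" "$x" >large_file.txt
```

Each repetition multiplies the length by ten, so reaching the 16MB range
of the original test takes only a handful of extra lines.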