Jeff King <peff@xxxxxxxx> writes:

> ... speaking of
> which, I need to polish that now that the streaming series seems to have
> settled...

I merely put it on the back burner, not necessarily declaring it as
"settled".

I just created a pair of test repositories with a single 800M binary
junk file.  One repository has it as a single loose object file, and the
other repository has it in a packfile.  The path is not explicitly
marked with any attributes, and I have no CRLF funniness configured, so
streaming would be used and no filtering would be involved.

Removing the file from the working tree and then checking it out of the
index gave me a pleasant surprise.  I originally did this only to help
smaller machines that cannot comfortably fit inflated blob data in core,
but it turns out that the streaming write seems to perform better as
well.  These are typical /usr/bin/time outputs:

$ /usr/bin/time git checkout a ;# w/o streaming
1.39user 2.23system 0:03.62elapsed 99%CPU (0avgtext+0avgdata 6297056maxresident)k
0inputs+1572872outputs (0major+399828minor)pagefaults 0swaps

$ /usr/bin/time git checkout a ;# w/ streaming
1.35user 1.22system 0:02.52elapsed 101%CPU (0avgtext+0avgdata 3151536maxresident)k
0inputs+1572872outputs (0major+203226minor)pagefaults 0swaps

So now I can say I am reasonably happy with the series, but I haven't
really exercised the code heavily, so there may still be bugs
lurking ;-).
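
In case anyone wants to try a similar comparison, here is a rough sketch
of one way such a pair of repositories could be set up; the directory
names, the use of /dev/urandom, and the core.bigFileThreshold override
are only my illustration, not necessarily how the repositories timed
above were actually made:

    $ mkdir big-loose && cd big-loose && git init
    $ dd if=/dev/urandom of=a bs=1M count=800   # ~800MB of incompressible junk
    # Raising core.bigFileThreshold keeps a newer "git add" from streaming
    # the blob straight into a packfile, so it stays a single loose object.
    $ git -c core.bigFileThreshold=1g add a
    $ git commit -m 'Add 800M binary junk'

    # Second repository: same content, but with the blob in a packfile.
    $ cd .. && cp -R big-loose big-packed
    $ (cd big-packed && git repack -a -d)

    # In either repository, drop the file and time checking it out of the index.
    $ cd big-loose && rm a
    $ /usr/bin/time git checkout a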