On Tue, Jan 07, 2025 at 09:47:43AM +0100, Patrick Steinhardt wrote:

> On Mon, Jan 06, 2025 at 09:39:04PM -0500, Jeff King wrote:
> > So I don't really see a way to do this robustly.
>
> I think I found a way, which goes back to the initial idea of just
> generating heaps of submodules. My current version generates a
> submodule "A" with a couple of recursive submodules followed by 2.5k
> additional submodules, which overall generates ~150kB of data. This
> can be done somewhat efficiently via git-hash-object(1) and
> git-mktree(1), and things work with a sleep before and after the call
> to grep(1).

Ah, of course. I was so lost in trying to find hacks that I forgot we
could just actually convince it to send a lot of data. ;)

Your solution looks nice. It's O(1) processes, since all of the heavy
lifting is done by the long gitmodules file and tree.

I was going to suggest that you could reduce the number of submodules
by giving them large paths (or large checked-out branch names) to get
more bytes of output per submodule. But there is not really much
point. What you have should run quite quickly.

> I'm a bit torn, though. The required setup is quite complex, and I
> wonder whether it is really worth it just to test this edge case. On
> the other hand, it is there to cover a recent fix in 082caf527e
> (submodule status: propagate SIGPIPE, 2024-09-20), so losing the test
> coverage isn't all that great, either. And keeping the race is not an
> option for me, either.
>
> So I'm inclined to go with the below version. WDYT?

Yeah, I was tempted after my last email to suggest just ditching the
test, too. :) But I think what you've written here is a good approach.
I'll look carefully over what you sent in the v3 series.

-Peff
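For concreteness, a minimal sketch of the data-generation trick
described above might look like the following. It assumes the git test
suite's test_seq helper (plain seq would do outside of it); the
submodule count, paths, and URL are made up for illustration, and this
is not the actual patch from the v3 series:

    # Sketch only: build one big .gitmodules blob and one tree with a
    # gitlink per submodule, keeping the process count constant.
    n=2500 &&
    commit=$(git rev-parse HEAD) &&  # any commit known to the repo

    # Emit all .gitmodules entries from a single shell loop ...
    test_seq "$n" | while read i
    do
        printf '[submodule "sub-%d"]\n\tpath = sub-%d\n\turl = ./sub\n' \
            "$i" "$i"
    done >gitmodules &&
    # ... and store the blob with one git-hash-object(1) call.
    blob=$(git hash-object -w gitmodules) &&

    # Feed .gitmodules plus one gitlink entry per submodule to a single
    # git-mktree(1) invocation (ls-tree format: mode, type, oid, path).
    {
        printf '100644 blob %s\t.gitmodules\n' "$blob" &&
        test_seq "$n" | while read i
        do
            printf '160000 commit %s\tsub-%d\n' "$commit" "$i"
        done
    } | git mktree

git-mktree(1) prints the resulting tree's object name, which would then
presumably be read into the index (e.g. with git read-tree) so that
git-submodule(1) sees one gitlink per entry; the point is that the
setup costs a constant number of processes no matter how many
submodules it describes.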