On Tue, Sep 14, 2021 at 11:37:06AM -0400, Jeff King wrote:

> The limit here is fairly arbitrary, and probably much higher than anyone
> would need in practice. It might be worth limiting it further, if only
> because we check it linearly (so with "m" local refs and "n" patterns,
> we do "m * n" string comparisons). But if we care about optimizing this,
> an even better solution may be a more advanced data structure anyway.

The limit I picked is 65536, because it seemed round and high. But note
that somebody can put up to almost-64k in a single ref-prefix line, which
means ultimately you can allocate 4GB.

I do wonder if dropping this to something like 1024 might be reasonable.
In practice I'd expect it to be a handful in most cases (refs/heads/*,
refs/tags/*, HEAD). But if you do something like:

  git fetch $remote 1 2 3 4 5 6 7 ...

then we'll prefix-expand those names with the usual lookup rules into
refs/1, refs/heads/1, refs/2, refs/heads/2, and so on. At some point it
becomes silly and works counter to the purpose of the optimization (you
send more prefix constraints than the actual ref advertisement, not to
mention that client bandwidth may not be symmetric).

I'm not sure what we want to declare as a reasonable limit. And this is
just about protecting the server; probably it makes sense for the client
to realize it's going to send a ridiculous number of prefixes and just
skip the feature entirely (since that's what actually saves the
bandwidth).

-Peff
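
For illustration, that client-side skip could look something like the
sketch below. The cap, the function name, and the plain fprintf()
standing in for pkt-line output are all made up for the example; this
is not the actual fetch-pack code.

  #include <stddef.h>
  #include <stdio.h>

  /*
   * Hypothetical cap, for illustration only; the real value is the
   * open question discussed above.
   */
  #define TOO_MANY_PREFIXES 1024

  /*
   * Emit "ref-prefix" arguments for an ls-refs/fetch request. If the
   * refspec expansion produced a ridiculous number of prefixes, emit
   * none at all: taking the unconstrained advertisement is cheaper
   * than shipping more constraints than the server has refs.
   */
  static void emit_ref_prefixes(FILE *out, const char **prefixes, size_t nr)
  {
  	size_t i;

  	if (nr > TOO_MANY_PREFIXES)
  		return; /* skip the feature entirely */

  	for (i = 0; i < nr; i++)
  		fprintf(out, "ref-prefix %s\n", prefixes[i]);
  }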