tl;dr Bumping the limit seems like a good idea to me.

On Sat, Jul 10, 2021 at 05:28:43PM -0700, Elijah Newren wrote:

> * My colleagues happily raised merge.renameLimit beyond 32767 when the
>   artificial cap was removed. 10 minute waits were annoying, but much
>   less so than having to manually cherry-pick commits (especially given
>   the risk of getting it wrong).[13]

One tricky thing here is that waiting 10 minutes may be worth it _if
the rename detection finds something_. If it doesn't, then it's just
annoying. I do think progress meters help a bit there, because then at
least the user understands what's going on. I'll go into more detail in
the sub-thread there. :)

> ==> Arguments for bumping MODERATELY higher:
>
> * We have bumped the limits twice before (in 2008 and 2011), both times
>   stating performance as the limiting factor. Processors are faster
>   today than then.[14,15]

Yeah, it is definitely time to revisit the default numbers. I think at
one point we talked about letting it run for N wallclock seconds before
giving up, but we've been hesitant to introduce that kind of time-based
limit, because it ends up with non-deterministic results (plus you
don't realize you're not going to finish until you've already wasted a
bunch of time, whereas the static limits can avoid even beginning the
work).

> * Peff's computations for performance in the last two bumps used "the
>   average size of a file in the linux-2.6 repository"[16], for which I
>   assume average==mean, but the file selected was actually ~2x larger
>   than the mean file size according to my calculations[17].
> [...]
> [17] Calculated and compared as follows (num files, mean size, size Peff used):
>      $ git ls-tree -rl v2.6.25 | wc -l
>      23810
>      $ git ls-tree -rl v2.6.25 | awk '{sum += $4} END{print sum/23810}'
>      11150.3
>      $ git show v2.6.25:arch/m68k/Kconfig | wc -c
>      20977

I don't remember my methodology at this point, but perhaps it was based
on blobs in the graph, not just one tree, like:

  $ git rev-list --objects v2.6.25 |
    git cat-file --batch-check='%(objecttype) %(objectsize) %(rest)' |
    awk '
      /^blob/ { sum += $2; total += 1 }
      END { print sum / total }
    '
  27535.8

I suspect the difference versus a single tree is that there is a
quadratic-ish property going on with file size: the bigger the file,
the more likely it is to be touched (so total storage is closer to
bytes^2). Looking at single-tree blob sizes is probably better, though,
as rename detection will happen between two single trees.

> * I think the median file size is a better predictor of rename
>   performance than mean file size, and median file size is ~2.5x smaller
>   than the mean[18].

There you might get hit with the quadratic-update thing again, though.
The big files are more likely to be touched, so could be weighted more
(though are they more likely to have been added/deleted/renamed? Who
knows). (One way to ballpark that single-tree median yourself is
sketched in the P.S. below.)

I don't think file size matters all _that_ much, though, as it has a
linear relationship to time spent. Whereas the number of entries is
quadratic. And of course the whole experiment is ball-parking in the
first place. We're looking for order-of-magnitude approximations, I'd
think.

> * The feedback about the limit is better today than when we last changed
>   the limits, and folks can configure a higher limit relatively easily.
>   Many already have.

I can't remember the last time I saw the limit kick in in practice, but
then I don't generally work with super-large repos (and my workflows
typically do not encourage merging across big segments of history).
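(For anyone who does hit it, the bump itself is a one-liner. The value
and the branch name below are purely illustrative, not a recommendation:

  $ git config --global diff.renameLimit 32767
  $ git config --global merge.renameLimit 32767

or just for a single invocation:

  $ git -c merge.renameLimit=32767 merge topic

merge.renameLimit falls back to diff.renameLimit when it is not set, so
setting only the latter covers both cases.)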
Nor do I remember the topic coming up on the list after the last bump.
So maybe that means that people are happily bumping the limits
themselves via config.

But I don't think that's really an argument against at least a moderate
bump. If it helps even a few people avoid having to learn about the
config, that's time saved. And it's a trivial code change on our end.

-Peff
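P.S. Since [18] came up: here is one quick way to ballpark the median
blob size in a single tree. This is just a sketch, not necessarily how
[18] was computed; for an even count it takes the lower of the two
middle values, which is fine for order-of-magnitude purposes:

  $ # blob sizes are column 4 of "ls-tree -rl"; print the middle one after sorting
  $ git ls-tree -rl v2.6.25 | awk '$2 == "blob" { print $4 }' | sort -n |
      awk '{ sizes[NR] = $1 } END { print sizes[int((NR + 1) / 2)] }'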