On Tue, Jul 13, 2021 at 6:10 PM Junio C Hamano <gitster@xxxxxxxxx> wrote:
>
> Elijah Newren <newren@xxxxxxxxx> writes:
>
> > The exhaustiveness of the quadratic portion comes from comparing each
> > file to more other files, not in using a different type of comparison.
>
> Exactly.  By culling potential matches early with heuristics, we
> make a trade-off of risking false-negatives but save a lot of cycles
> while trying to find "renames with modifications (which is what we
> called 'inexact rename')", and my comment equated fewer false-negatives
> with more precision.

Okay, I think I'm mostly following what you're saying now, but I'm
curious about the false-negative comment.  Am I mixing up
negatives/positives (as I'm prone to do), or would it be more correct
to say that the new algorithm risks suboptimal positives rather than
false negatives?  In particular, the new algorithm will compare files
with the same basename and accept that pairing if they are similar
enough, even if a better match might exist elsewhere.  However, a lack
of a match among same-basenamed files will not leave those files
unmatched; they will instead be included in the exhaustive detection
portion, so we can still detect renames for such paths.
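To make the distinction concrete, here is a rough sketch (in Python, not
git's actual C implementation) of the two-pass shape being discussed.
The function names, the `MINIMUM_SIMILARITY` threshold, and the use of
`difflib` as a similarity stand-in are all my own illustration, not
git's real scoring; the point is only that pass 1 accepts a
good-enough same-basename pairing without looking further (a possibly
suboptimal positive), while anything unmatched falls through to the
exhaustive quadratic pass 2 (so no false negative is introduced):

```python
# Illustrative sketch only -- not git's actual rename-detection code.
import os
from difflib import SequenceMatcher

MINIMUM_SIMILARITY = 0.5  # assumed threshold; git uses its own scoring


def similarity(a: str, b: str) -> float:
    """Stand-in content-similarity measure (git computes this differently)."""
    return SequenceMatcher(None, a, b).ratio()


def detect_renames(deleted: dict, added: dict) -> dict:
    """Map each deleted path to the added path it was renamed to.

    deleted/added map path -> file content.
    """
    renames = {}

    # Pass 1: basename-guided matching.  When exactly one deleted path
    # and one added path share a basename, compare just that pair and
    # accept it if similar enough -- even if some other added file
    # might have scored higher ("suboptimal positive").
    del_by_base, add_by_base = {}, {}
    for path in deleted:
        del_by_base.setdefault(os.path.basename(path), []).append(path)
    for path in added:
        add_by_base.setdefault(os.path.basename(path), []).append(path)

    for base, dpaths in del_by_base.items():
        apaths = add_by_base.get(base, [])
        if len(dpaths) == 1 and len(apaths) == 1:
            d, a = dpaths[0], apaths[0]
            if similarity(deleted[d], added[a]) >= MINIMUM_SIMILARITY:
                renames[d] = a

    # Pass 2: exhaustive quadratic comparison over whatever remains, so
    # files that failed the basename check still get a chance to match
    # (hence no false negative from the heuristic).
    remaining_add = [p for p in added if p not in renames.values()]
    for d in (p for p in deleted if p not in renames):
        best, best_score = None, MINIMUM_SIMILARITY
        for a in remaining_add:
            score = similarity(deleted[d], added[a])
            if score > best_score:
                best, best_score = a, score
        if best is not None:
            renames[d] = best
            remaining_add.remove(best)
    return renames
```

With this sketch, `src/foo.c` -> `lib/foo.c` would be paired in pass 1
via the shared basename, while a rename that also changes the basename
would only be found by the exhaustive pass 2.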