Jeff King <peff@xxxxxxxx> writes:

> Signed-off-by: Jeff King <peff@xxxxxxxx>
> ---
> This was another patch from late in the freeze period. It was in
> response to a user getting confused about why rename detection wasn't
> happening in a large merge. Is it appropriate to print this for every
> rename we try? Or should it just be for merges?
>
> Perhaps we should also bump the default limit from 100, which I think
> was just arbitrarily chosen.
> ...
> +	if ((num_create > rename_limit && num_src > rename_limit) ||
> +	    (num_create * num_src > rename_limit * rename_limit)) {
> +		warning("too many files, skipping inexact rename detection");
>  		goto cleanup;
> +	}
>
>  	mx = xmalloc(sizeof(*mx) * num_create * num_src);
>  	for (dst_cnt = i = 0; i < rename_dst_nr; i++) {

This reminds me of the 6d24ad9 (Optimize rename detection for a huge
diff) topic, which greatly reduces the above allocation.  Some benching
with that patch applied may prove useful for establishing much higher
limits, I suspect.