On Tue, 29 Jan 2008, Junio C Hamano wrote:
>
> I wonder if the second one for the overflow avoidance should be
> using || instead of &&, though.

No, we want to be able to handle the case where there is (for example)
just one removed file, but lots of new ones. That's not expensive at all.

So we don't want to require that *both* the counts for removed and new
files are low; we really want to check that we don't have too many
combinations together.

But the

	if (rename_limit <= 0 || rename_limit > 32767)
		rename_limit = 32767;

which is there purely to avoid overflow in 32-bit multiplication should
probably be changed to be more reasonable. We'll never want to try to do
a matrix that is really 32k * 32k in size, even if we can calculate its
size ;)

So maybe we should just make that hard limit more reasonable. 100x100 was
too small, but a 1000x1000 matrix might be acceptable.

Or, better yet (which was what I was hoping for originally), we'd just
make the inexact rename detection be linear-size/time rather than O(m*n).
But those patches never really came together, so we do need to limit it
more aggressively.

		Linus