On Sun, 12 Mar 2006, Junio C Hamano wrote:
>
> To reduce wasted memory, wait until the hash fills up more
> densely before we rehash. This reduces the working set size a
> bit further.

Umm. Why do you rehash at all? Just take the size of the "src" file as
the initial hash size.

Also, I think it is likely really wasteful to try to actually hash at
_each_ character.

Instead, let's say that the chunk-size is 8 bytes (like you do now), and
let's say that you have a good 32-bit hash of those 8 bytes.

What you can do is:

 - for each 8 bytes in the source, hash those 8 bytes (not every byte)

 - for each byte in the other file, hash the next 8 bytes. If that
   matches a hash in the source with a non-zero count, subtract the
   count for that hash and move up by _eight_ characters! If it
   doesn't, add one to a "characters not matched" counter, move up
   _one_ character, and try again.

At the end of this, you have two counts: the count of characters that
you couldn't match in the other file, and the count of 8-byte
hash-chunks that you couldn't match in the first one. Use those two
counts to decide whether it's close or not.

Especially for good matches, this should basically cut your work to an
eighth of what you do now. Actually, even for bad matches, you cut the
first-source overhead to an eighth (the second file will obviously do
the "update by 1 byte" most of the time).

Don't you think that would be as accurate as what you're doing now
(it's the same basic notion, after all), and noticeably faster?

		Linus
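For illustration, here is a minimal C sketch of the two-counter scheme
described above. The names (chunk_hash, estimate_similarity), the table
size, the multiplicative hash constant, and the handling of the short
tail of the second file are all made up for this sketch; this is not
git's actual diffcore code.

/*
 * Sketch of the chunk-hash similarity estimate described above.
 * All names and constants are illustrative, not taken from git.
 */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define CHUNK 8
#define TABLE_BITS 16
#define TABLE_SIZE (1u << TABLE_BITS)

/* Hash 8 bytes into a table slot; a simple multiplicative mix. */
static uint32_t chunk_hash(const unsigned char *p)
{
	uint64_t v;
	memcpy(&v, p, CHUNK);
	return (uint32_t)((v * 0x9e3779b97f4a7c15ull) >> (64 - TABLE_BITS));
}

/*
 * Compare 'dst' against the 8-byte chunks of 'src'.  On return:
 *   *unmatched_bytes  - bytes in dst whose next 8 bytes matched no
 *                       remaining source chunk
 *   *unmatched_chunks - source chunks never consumed by dst
 */
static void estimate_similarity(const unsigned char *src, size_t src_len,
				const unsigned char *dst, size_t dst_len,
				size_t *unmatched_bytes,
				size_t *unmatched_chunks)
{
	static unsigned int count[TABLE_SIZE];
	size_t i, src_chunks = 0;

	memset(count, 0, sizeof(count));

	/* Hash every 8th position of the source, not every byte. */
	for (i = 0; i + CHUNK <= src_len; i += CHUNK) {
		count[chunk_hash(src + i)]++;
		src_chunks++;
	}

	*unmatched_bytes = 0;
	i = 0;
	while (i + CHUNK <= dst_len) {
		uint32_t h = chunk_hash(dst + i);
		if (count[h]) {
			/* Hash matched: consume one chunk, skip 8 bytes. */
			count[h]--;
			src_chunks--;
			i += CHUNK;
		} else {
			/* No match: one unmatched byte, advance by one. */
			(*unmatched_bytes)++;
			i++;
		}
	}
	/* Arbitrary choice: count the unhashable tail as unmatched. */
	*unmatched_bytes += dst_len - i;

	*unmatched_chunks = src_chunks;
}

Note that the match test is on the hash value only, as in the
description above ("matches a hash in the source"), so hash collisions
count as matches; that is the price paid for never comparing the actual
bytes.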