On Tue, Oct 02, 2007 at 08:10:28AM +0200, David Kastrup wrote:

> I have not actually looked at the actual task that the structures are
> going to be used in, and whether "reusing" the information is likely
> to be worth the trouble.

The algorithm is something like this: we have N files, and we want to
find "similar" candidates. So we go through each file and generate a
table of fingerprint hashes (diffcore-rename.c:hash_chars), and then
compare each file with every other file, using the hash tables to do
the comparison.

So the comparison step for two files is currently something like:

  for each hash in file1
    hash2 = look up hash in file2
    compare hash and hash2

and if they were sorted, perhaps we could do something merge-like:

  while hashes are left to compare
    compare file1.next, file2.next
    advance file1, file2, or both (depending on comparison)

> When we are talking about buzzword compliance, "keep sorted" with the
> meaning of "maintain sorted across modifications" has an O(n^2) or at
> least O(nm) ring to it. However, if it is possible to sort it just
> once, and then only merge with other lists...

It would be sorted just once. I.e.:

  for each file
    generate file.hashes
    sort file.hashes

  for each file1
    for each file2
      compare file1.hashes to file2.hashes

where that 'compare' step is taking most of the CPU time (for the
obvious reason that we call it in an O(n^2) loop).

I will try to implement this as time permits, but if you want to
tinker with it in the meantime, feel free.

-Peff
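
For what it's worth, here is a minimal sketch of that merge-style
compare in C. It assumes each file's fingerprint table has already
been flattened into an array of (hash, count) entries sorted by hash;
the names "struct entry" and "count_common" are illustrative, not
git's actual API:

  #include <stddef.h>
  #include <stdint.h>

  /* Hypothetical flattened fingerprint entry: one hash value and how
   * many times it occurred in the file. */
  struct entry {
          uint32_t hash;
          uint32_t count;
  };

  /* Walk both sorted arrays in lockstep; for each hash present in
   * both files, credit the smaller of the two counts as shared. */
  static uint64_t count_common(const struct entry *a, size_t na,
                               const struct entry *b, size_t nb)
  {
          size_t i = 0, j = 0;
          uint64_t common = 0;

          while (i < na && j < nb) {
                  if (a[i].hash < b[j].hash)
                          i++;            /* only in file1; skip it */
                  else if (a[i].hash > b[j].hash)
                          j++;            /* only in file2; skip it */
                  else {
                          common += a[i].count < b[j].count
                                  ? a[i].count : b[j].count;
                          i++;            /* in both; advance each */
                          j++;
                  }
          }
          return common;
  }

Each pairwise compare then becomes a single O(na + nb) walk instead of
a hash-table lookup per entry, and the sort cost is paid once per file
rather than once per pair in the O(n^2) loop.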