Re: What's cooking in git.git (topics)

Jeff King <peff@xxxxxxxx> writes:

> On Tue, Oct 02, 2007 at 08:10:28AM +0200, David Kastrup wrote:

[...]

> The algorithm is something like this: We have N files, and we want
> to find "similar" candidates. So we go through each file and
> generate a table of fingerprint hashes
> (diffcore-rename.c:hash_chars), and then compare each file with
> every other file, using the hash tables to do the comparison.
>
> So the comparison step for two files is currently something like:
>
>   for each hash in file1
>     hash2 = look up hash in file2
>     compare hash and hash2
>
> and if they were sorted, perhaps we could do something merge-like:
>
>   while hashes are left to compare
>       compare file1.next, file2.next
>       advance file1, file2, or both (depending on comparison)
>
>> When we are talking about buzzword compliance, "keep sorted" with
>> the meaning of "maintain sorted across modifications" has an O(n^2)
>> or at least O(nm) ring to it.  However, if it is possible to sort
>> it just once, and then only merge with other lists...
>
> It would be sort once. I.e.,:
>
>   for each file
>      generate file.hashes
>      sort file.hashes
>   for each file1
>     for each file2
>       compare file1.hashes to file2.hashes
>
> where that 'compare' step is taking most of the CPU time (for the
> obvious reason that we call it in an O(n^2) loop).
>
> I will try to implement this as time permits, but if you want to
> tinker with it in the meantime, feel free.

This does not require an actual merge _sort_ AFAICS: do the
"sort file.hashes" step using qsort.  The comparison step does not
need to produce merged output, but merely advances through the two
sorted hash arrays and gathers statistics.
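
In C, that comparison could look roughly like this (struct and
function names are made up for illustration; this is a sketch, not
the actual diffcore-rename code):

  struct file_hash {
          unsigned int hash;
          unsigned int count;   /* occurrences of this hash in the file */
  };

  /* Hypothetical helper: both arrays must already be sorted by hash;
   * we advance through them in step and add up the overlap. */
  static unsigned int count_common(const struct file_hash *a, int na,
                                   const struct file_hash *b, int nb)
  {
          unsigned int common = 0;
          int i = 0, j = 0;

          while (i < na && j < nb) {
                  if (a[i].hash < b[j].hash)
                          i++;
                  else if (a[i].hash > b[j].hash)
                          j++;
                  else {
                          common += a[i].count < b[j].count
                                    ? a[i].count : b[j].count;
                          i++;
                          j++;
                  }
          }
          return common;
  }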

This should already beat the pants off the current implementation,
even when the hash array is sparse, simply because our inner loop then
has perfect cache coherence.

Getting rid of this outer O(n^2) remains an interesting challenge,
though.  One way would be the following: fill a _single_ array with
entries containing _both_ hash and file number.  Sort this, and then
gather the statistics of hash runs in a single pass through it.
That reduces the O(n^2) behavior to only those parts with actual hash
collisions.
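
A minimal sketch of that single-array idea (again with made-up names,
just to show the shape of the sort-then-scan):

  #include <stdlib.h>

  struct hash_entry {
          unsigned int hash;
          int file;             /* which file this fingerprint came from */
  };

  static int cmp_hash(const void *va, const void *vb)
  {
          const struct hash_entry *a = va, *b = vb;

          if (a->hash != b->hash)
                  return a->hash < b->hash ? -1 : 1;
          return a->file - b->file;
  }

  /* Sort the combined array once; each run of equal hashes then names
   * exactly the files sharing that fingerprint, and only those
   * (hopefully short) runs need pairwise accounting. */
  static void scan_runs(struct hash_entry *all, int n)
  {
          int i, j;

          qsort(all, n, sizeof(*all), cmp_hash);
          for (i = 0; i < n; i = j) {
                  for (j = i + 1; j < n && all[j].hash == all[i].hash; j++)
                          ; /* find the end of this run of equal hashes */
                  /* all[i..j-1] is one run: credit each pair of
                   * distinct files that appears in it */
          }
  }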

-- 
David Kastrup

