Re: [PATCH 1/1] ref-filter.c: find disjoint pattern prefixes

Hi Jacob,

On Wed, Jun 26, 2019 at 05:37:42PM -0700, Jacob Keller wrote:
> [ ... ]
>
> > Instead, we want to partition the patterns into disjoint sets, where we
> > know that no ref will be matched by any two patterns in different sets.
> > In the above, these are:
> >
> >   - {'refs/heads/a/*', 'refs/heads/a/b/c'}, and
> >   - {'refs/tags/v1.0.0'}
>
> Is this disjoint set calculation already existing, or did you have to
> add it in this patch?

Both the disjoint-set calculation and the prefixing procedure are new in
this patch. But we never actually compute the disjoint sets explicitly;
rather, we build them up implicitly while computing what will become the
longest prefixes of each subset.
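To make the idea concrete, here is a rough Python sketch of the end
result, not the patch's actual C implementation (which proceeds
recursively, per the steps quoted below). The idea: truncate each
pattern at its first glob metacharacter, sort the literal prefixes, and
drop any prefix already covered by a shorter one. The function name and
the simplification are mine, for illustration only:

```python
def disjoint_prefixes(patterns):
    """Reduce glob patterns to disjoint literal prefixes (simplified sketch)."""
    def literal(p):
        # Keep only the literal part up to the first glob metacharacter.
        for i, c in enumerate(p):
            if c in '*?[':
                return p[:i]
        return p

    out = []
    for lit in sorted(literal(p) for p in patterns):
        # A prefix already covered by a shorter one adds nothing new.
        if out and lit.startswith(out[-1]):
            continue
        out.append(lit)
    return out

print(disjoint_prefixes(['refs/heads/a/*', 'refs/heads/a/b/c',
                         'refs/tags/v1.0.0']))
# -> ['refs/heads/a/', 'refs/tags/v1.0.0']
```

Note how 'refs/heads/a/b/c' disappears into the 'refs/heads/a/' set,
matching the partition above.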

> >   4. Otherwise, recurse on step (3) with the slice of the list
> >      corresponding to our current prefix (i.e., the subset of patterns
> >      that have our prefix as a literal string prefix.)
> >
> > This algorithm is 'O(kn + n log(n))', where 'k' is max(len(pattern)) for
> > each pattern in the list, and 'n' is len(patterns).
> >
>
> ok, so if we can assume that k is some relatively small constant
> number (since the maximum pattern length isn't likely to grow without
> bounds), this is O(n*log(n)) on the number of patterns, so we don't
> even approach n^2 even when we are given a large number of patterns.
> Nice!
>
> > By discovering this set of interesting patterns, we reduce the runtime
> > of multi-pattern 'git for-each-ref' (and other ref traversals) from
> > O(N) to O(n log(N)), where 'N' is the total number of packed references.
>
> So here, n is the number of patterns still? This seems like a pretty
> significant gain when we have a large number of packed references.

Yes, 'n' is the number of patterns given. For example, the invocation

  $ git for-each-ref 'refs/heads/*' 'refs/tags/*'

has 'n = 2', and 'N' is unknown. The asymptotics here compare the old
behavior, where we made no effort to compute good queries and resorted
to a linear scan of all packed references, with the new behavior, where
we issue at most one query per pattern, each a logarithmic-time scan of
.git/packed-refs.
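Since packed-refs is kept sorted, each per-pattern query amounts to a
binary search over that sorted list. A hypothetical Python sketch using
`bisect` (the name `prefix_slice` and the sample refs are made up for
illustration; the real code operates on the packed-refs file directly):

```python
import bisect

def prefix_slice(sorted_refs, prefix):
    """Return all refs starting with `prefix`, found via O(log N) bisection."""
    lo = bisect.bisect_left(sorted_refs, prefix)
    # Smallest string that sorts after every string carrying this prefix:
    # bump the prefix's last character by one code point.
    end = prefix[:-1] + chr(ord(prefix[-1]) + 1)
    hi = bisect.bisect_left(sorted_refs, end)
    return sorted_refs[lo:hi]

refs = sorted(['refs/heads/main', 'refs/tags/a', 'refs/tags/b',
               'refs/tags/huge-1', 'refs/tags/huge-2'])
print(prefix_slice(refs, 'refs/tags/huge-'))
# -> ['refs/tags/huge-1', 'refs/tags/huge-2']
```

With one such bisection per disjoint prefix, the total work is
O(n log(N)) rather than O(N).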

> >
> > Running 'git for-each-ref refs/tags/a refs/tags/b' on a repository with
> > 10,000,000 refs in 'refs/tags/huge-N', my best-of-five times drop from:
> >
> >   real    0m5.805s
> >   user    0m5.188s
> >   sys     0m0.468s
> >
> > to:
> >
> >   real    0m0.001s
> >   user    0m0.000s
> >   sys     0m0.000s
> >
>
> That's a pretty significant decrease!

Yes, it's quite good here, but the benchmark is designed to show the
change at its best ;-). As I note below, the real-world speed-ups aren't
quite as remarkable, but it's not uncommon for us at GitHub to have a
repository of the above shape in terms of the number of references.

So, it's an improvement almost no matter where you are, but it works
especially well for us.

> > On linux.git, the times to dig out two of the latest -rc tags drop
> > from 0.002s to 0.001s, so the change on repositories with fewer tags
> > is much less noticeable.
> >
>
> This explains why it might not have been done before: many
> repositories wouldn't benefit much.
>
> That said, the patch description doesn't make it seem very
> complicated. I did run out of time reading the message, so I'll have
> to follow up reviewing the actual change below later. I think the
> description of the goal and solution is sound though.

Thanks for the initial review :-).

Thanks,
Taylor
