Re: [PATCH 1/1] delete multiple tags in a single transaction

On Thu, Aug 08, 2019 at 04:43:16PM -0700, Phil Hord wrote:

> > I also get really slow times on a repo with ~20,000 tags (though order
> > ~3 minutes rather than ~30, probably due to having an SSD on this
> > machine) -- but ONLY IF the refs are packed first (git pack-refs
> > --all).  If the refs are loose, it's relatively quick to delete a
> > dozen thousand or so tags (order of a few seconds).  It might be worth
> > mentioning in the commit message that this only makes a significant
> > difference in the case where the refs are packed.
> 
> I'm also using an SSD but I still see about 10 tags per second being
> deleted with the current code (and packed-refs).  I see that I'm
> CPU-bound, so I guess most of the time is spent searching through
> .git/packed-refs.  Probably it will run faster as it progresses. I
> guess the 18,000 branches in my repo keep me on the wrong end of O(N).

Right, deleting individually from packed-refs is inherently quadratic,
because each deletion has to rewrite the entire file. So if you delete
all n entries (or the majority of them), that's O(n^2) individual entry
writes.

The loose case is just touching the filesystem for each entry (and the
refs code is smart enough not to bother rewriting packed-refs if the
entry isn't present there). That _can_ be slow if you have a lot of
entries in the same directory (because some filesystems are particularly
bad at this).

So the actual backing storage speed isn't really that important. All the
time goes to copying the same packed-refs entries over and over, whether
they hit the disk or not.
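To make that concrete, here's a small throwaway-repo sketch (the tag
count and names are made up for illustration). The quadratic per-ref
loop is shown commented out next to the single-transaction version:

```shell
set -e
dir=$(mktemp -d); git init -q "$dir"; cd "$dir"
git config user.name t; git config user.email t@example.com
git commit -q --allow-empty -m init
for i in $(seq 1 200); do git tag "t$i"; done
git pack-refs --all

# Slow path: each deletion rewrites all of packed-refs, so deleting
# everything costs O(n^2) entry writes in total:
#   for t in $(git tag -l); do git tag -d "$t"; done

# Fast path: one transaction, one rewrite of packed-refs.
git for-each-ref --format='delete %(refname)' refs/tags/ |
git update-ref --stdin

git tag -l | wc -l    # 0
```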

Your solution (using a single transaction) is definitely the right one
(and probably should apply to "branch -d", too). That's what we did long
ago for update-ref, and I think nobody ever really noticed for the
porcelain commands because they don't tend to be used for such bulk
changes.
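Until the porcelain learns to do that itself, the same update-ref trick
already covers bulk branch deletion; a sketch with made-up branch names:

```shell
set -e
dir=$(mktemp -d); git init -q "$dir"; cd "$dir"
git config user.name t; git config user.email t@example.com
git commit -q --allow-empty -m init
git branch feature/a
git branch feature/b

# Delete every branch under refs/heads/feature/ in one transaction.
git for-each-ref --format='delete %(refname)' refs/heads/feature/ |
git update-ref --stdin

git branch --list 'feature/*' | wc -l   # 0
```

(Note update-ref won't do the "is this branch merged?" safety check that
"branch -d" does, so this behaves more like "branch -D".)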

> But it should have occurred to me while I was in the code that there
> is a different path for unpacked refs which could explain my previous
> speeds.  I didn't think I had any unpacked refs, though, since every
> time I look in .git/refs for what I want, I find it relatively empty.
> I see 'git pack-refs --help' says that new refs should show up loose,
> but I can't say that has happened for me.  Maybe a new clone uses
> packed-refs for *everything* and only newly fetched things are loose.
> Is that it?  I guess since I seldom fetch tags after the first clone,
> it makes sense they would all be packed.

Right, a fresh clone always writes all of its entries as packed refs.
It used to be done by hand, but it happens in a special "initial
transaction" method these days, since 58f233ce1e
(initial_ref_transaction_commit(): function for initial ref creation,
2015-06-22).
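One way to see the loose/packed split for yourself (a throwaway-repo
sketch; the tag names are arbitrary): loose refs live as one file each
under .git/refs, while packed entries are lines in .git/packed-refs:

```shell
set -e
dir=$(mktemp -d); git init -q "$dir"; cd "$dir"
git config user.name t; git config user.email t@example.com
git commit -q --allow-empty -m init
git tag v1
git tag v2

find .git/refs -type f | wc -l       # 3 loose files: branch + two tags
git pack-refs --all
find .git/refs -type f | wc -l       # 0: all migrated to packed-refs
grep -vc '^[#^]' .git/packed-refs    # 3 entries (skip header/peel lines)
```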

> > In contrast, it appears that `git update-ref --stdin` is fast
> > regardless of whether the refs are packed, e.g.
> >    git tag -l 'feature/*' | sed -e 's%^%delete refs/tags/%' |
> >    git update-ref --stdin
> > finishes quickly (order of a few seconds).
> 
> Nice!  That trick is going in my wiki for devs to use on their VMs.
> Thanks for that.

Please do encourage people to use for-each-ref instead of the "tag -l"
porcelain, as the latter is subject to change. My usual bulk deletion
command is:

  git for-each-ref --format='delete %(refname)' refs/tags/feature/ |
  git update-ref --stdin

-Peff


