Re: Compressing packed-refs

On Fri, Jul 17, 2020 at 02:27:23AM -0400, Jeff King wrote:
> > Which really just indicates how much duplicated data is in that 
> > file. If reftables will eventually replace refs entirely, then we 
> > probably shouldn't expend too much effort super-optimizing it, 
> > especially if I'm one of the very few people who would benefit from 
> > it. However, I'm curious if a different sorting strategy would help 
> > remove most of the duplication without requiring too much 
> > engineering time.
> 
> You definitely could store it in a more efficient way. Reftables will
> have most of the things you'd want: prefix compression, binary oids,
> etc.  I wouldn't be opposed to a tweak to packed-refs in the meantime if
> it was simple to implement. But definitely we'd want to retain the
> ability to find a subset of refs in sub-linear time. That might get
> tricky and push it from "simple" to "let's just invest in reftable".

I'm fine either way, but the better approach seems to be waiting for 
reftables to land.

> You might also consider whether you need all of those refs at all in the
> object storage repo. The main uses are:
> 
>   - determining reachability during repacks; but you could generate this
>     on the fly from the refs in the individual forks (de-duplicating as
>     you go). We don't do this at GitHub, because the information in the
>     duplicates is useful to our delta-islands config.
>   - getting new objects into the object store. It sounds like you might
>     do this with "git fetch", which does need up-to-date refs. We used
>     to do that, too, but it can be quite slow. These days we migrate the
>     objects directly via hardlinks, and then use "update-ref --stdin" to
>     sync the refs into the shared storage repo.

This is definitely interesting to me, as git fetch runs into objstore 
repos do take a long time, even after moving them into a "lazy" thread.  
It's not so much of a problem for git.kernel.org, where pushes come in 
sporadically, but CAF's automation usually pushes several hundred repo 
updates at the same time, and the subsequent fetch into objstore takes 
several hours to complete.
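For reference, each of those syncs today is essentially a straight 
fetch of the fork into the shared repo, roughly like this (the repo 
paths and the refs/virtual/ namespace are only illustrative of how 
per-fork refs are kept apart; the real refspec may differ):

  # fetch every ref from the fork into the objstore repo under a
  # per-fork namespace (paths and refspec illustrative)
  git -C /path/to/objstore.git fetch /path/to/fork.git \
      '+refs/*:refs/virtual/fork/*'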

Can you elaborate on the details of that operation, if it's not secret 
sauce? Say, I have two repos:

repoA/objects/
repoS/objects/

does this properly describe the operation (sketched in shell below the 
list):

1. locate all pack/* and XX/* files in repoA/objects (what about the 
   info/packs file, or do you loosen all packs first?)
2. hardlink them into the same location in repoS/objects
3. use git-show-ref from repoA to generate stdin for git-update-ref in 
   repoS
4. Subsequent runs of repack in repoA should unreference the hardlinked 
   files in repoA/objects, leaving repoS holding the only copy
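
In shell terms, my mental model of steps 1-3 is roughly this (the 
refs/virtual/ namespace is illustrative, objects/info/ is ignored, and 
there's no locking, clash handling, or error checking):

  # steps 1 and 2: hardlink packfiles and loose objects from the fork
  # into the shared object store (ln fails on existing names, which a
  # real implementation would have to handle)
  ln repoA/objects/pack/* repoS/objects/pack/
  for d in repoA/objects/??; do
      [ -d "$d" ] || continue
      mkdir -p "repoS/objects/${d##*/}"
      ln "$d"/* "repoS/objects/${d##*/}/"
  done

  # step 3: mirror repoA's refs into repoS under a per-fork namespace
  # so they remain distinguishable from other forks' refs
  git -C repoA show-ref |
    awk '{ print "update refs/virtual/repoA/" $2, $1 }' |
    git -C repoS update-ref --stdin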

I'm not sure I'm quite comfortable doing this kind of spinal surgery on 
git repos yet, but I'm willing to get my feet wet in some safe 
environments. :)

>   - advertising alternate ref tips in receive-pack (i.e., saying "we
>     already know about object X" if it's in somebody else's fork, which
>     means people pulling from Linus and then pushing to their fork don't
>     have to send the objects again). You probably don't want to
>     advertise all of them (just sifting the duplicates is too
>     expensive). We use core.alternateRefsCommand to pick out just the
>     ones from the parent fork. We _do_ still use the copy of the refs in
>     our shared storage, not the ones in the actual fork. But that's
>     because we migrate objects to shared storage asynchronously (so it's
>     possible for one fork to have refs pointing to objects that aren't
>     yet available to the other forks).

Yes, I did ponder using this, especially when dealing with objstore 
repos with hundreds of thousands of refs -- thanks for another nudge in 
this direction. I am planning to add a concept of indicating "baseline" 
repos to grokmirror, which would allow us to:

1. set them as islandCore in objstore repositories
2. return only their refs via alternateRefsCommand (rough config sketch 
   below)
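
Concretely, reusing the repoA/repoS example from above, I'm imagining 
something like this for the config side (the "baseline" island name, 
the refs/virtual/ layout, and the helper script path are all 
placeholders):

  # in the objstore repo: per-fork delta islands, with the baseline
  # fork's island packed first as the core
  git -C repoS config repack.useDeltaIslands true
  git -C repoS config pack.island 'refs/virtual/([^/]+)/'
  git -C repoS config pack.islandCore baseline

  # in each fork repo: only advertise the baseline's tips from the
  # alternate; the (hypothetical) helper script would run for-each-ref
  # limited to the baseline's namespace and print the object ids
  git -C repoA config core.alternateRefsCommand /usr/local/bin/baseline-refs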

This seems fairly straightforward, and I will probably add it in the 
next week.

> So it's definitely not a no-brainer, but possibly something to 
> explore.

Indeed -- at this point I'm more comfortable letting git itself do all 
object moving, but as I get more familiar with how object-storage repos 
work, I can optimize various aspects of it.

Thanks for your help!

-K


