Re: Git Scaling: What factors most affect Git performance for a large repo?


 



On Fri, 2015-02-20 at 12:59 -0800, Junio C Hamano wrote:
> David Turner <dturner@xxxxxxxxxxxxxxxx> writes:
> 
> > On Fri, 2015-02-20 at 06:38 +0700, Duy Nguyen wrote:
> >> >    * 'git push'?
> >> 
> >> This one is not affected by how deep your repo's history is, or how
> >> wide your tree is, so should be quick..
> >> 
> >> Ah the number of refs may affect both git-push and git-pull. I think
> >> Stefan knows better than I in this area.
> >
> > I can tell you that this is a bit of a problem for us at Twitter.  We
> > have over 100k refs, which adds ~20MiB of downstream traffic to every
> > push.
> >
> > I added a hack to improve this locally inside Twitter: The client sends
> > a bloom filter of shas that it believes that the server knows about; the
> > server sends only the sha of master and any refs that are not in the
> > bloom filter.  The client  uses its local version of the servers' refs
> > as if they had just been sent....
> 
> Interesting.
> 
> Care to extend the discussion to improve the protocol exchange,
> which starts at $gmane/263932 [*1*], where I list the known issues
> around the current protocol (and a possible way to correct them in
> footnotes)?

At Twitter, we changed to an entirely different clone strategy for our
largest repo: instead of using git clone, we use bittorrent (on a
tarball of the repo).  For git pull, we maintain a journal of all pushes
ever made to the server (data and ref updates); each client keeps track
of its location in that journal.  So pull requires no computation on the
server: the client just requests the segment of the journal it doesn't
have and replays it.
This scheme isn't perfect: clients end up with data about even
transitory and long-dead branches, and there is presently no way to
redact data (although that would be possible to add).  And of course
shallow and sparse clones are impossible.  But it works quite well for
Twitter's needs.  As I understand it, the hope is to implement redaction
and then submit patches upstream.
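
For concreteness, here is a rough sketch of the pull side in Python.
The endpoint, file names, and wire format below are invented for
illustration; they are not what our actual implementation uses.

    # The server just streams journal entries (pack data and ref updates
    # recorded at push time) starting from the offset the client asks for.

    import json
    import urllib.request

    JOURNAL_URL = "https://git-server.example.com/journal"  # hypothetical endpoint
    OFFSET_FILE = ".git/journal-offset"   # how far this client has replayed

    def pull():
        try:
            offset = int(open(OFFSET_FILE).read())
        except FileNotFoundError:
            offset = 0
        # No server-side computation: just "give me entries >= offset".
        req = urllib.request.Request("%s?from=%d" % (JOURNAL_URL, offset))
        with urllib.request.urlopen(req) as resp:
            for line in resp:
                replay(json.loads(line))
                offset += 1
        open(OFFSET_FILE, "w").write(str(offset))

    def replay(entry):
        # Replaying applies the entry exactly as the original push did:
        # store the pushed objects, then point the ref at the new sha.
        if entry["kind"] == "pack":
            pass  # e.g. feed entry["data"] to `git index-pack --stdin`
        elif entry["kind"] == "ref-update":
            pass  # e.g. `git update-ref <ref> <new sha>`

Redaction would presumably amount to rewriting or skipping entries when
the journal is replayed, which is why it could be added later.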

I say "we", but I personally did not do any of the above work.  Because
I haven't looked into most of these issues personally, I'm reluctant to
say too much on protocol improvements.  I would want to better
understand the constraints.  I do think there is value in having a
diversity of possible protocols to handle different use cases.  As
repositories grow, traditional full-repo clones become less viable.
Network transfer and client-side performance both suffer.  In a repo the
size of (say) WebKit, the traditional model works.  In a repo the size
of Facebook's monorepo, it starts to break down.  So Facebook does
entirely shallow clones (using hg, but the problems are similar in git).
Commands like log and blame instead call out to a server to gather
history data.  At Google, whose repo is, I think, two or three orders of
magnitude larger than WebKit, all local copies are both shallow and
sparse; there is also support for "sparse commits" -- so that a commit
that affects (say) ten thousand files across the entire tree can be kept
to a reasonable size.

<end digression>

Twitter's journal scheme explains why I implemented bloom filter pushes
-- the number of refs does not significantly affect pull performance,
but pushes still go through the normal git machinery, so we wanted an
optimization to reduce latency there.
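
To make that concrete, here is a rough sketch in Python of the Bloom
filter exchange described above.  The filter parameters and helper names
are invented for illustration and are not taken from our actual patch.

    import hashlib

    M = 8 * 1024 * 1024   # filter size in bits (made-up parameter)
    K = 4                 # number of hash functions (made-up parameter)

    def _positions(sha):
        # Derive K bit positions from a 40-char hex sha.
        for i in range(K):
            h = hashlib.sha1(("%d:%s" % (i, sha)).encode()).digest()
            yield int.from_bytes(h[:8], "big") % M

    def make_filter(shas):
        bits = bytearray(M // 8)
        for sha in shas:
            for p in _positions(sha):
                bits[p // 8] |= 1 << (p % 8)
        return bits

    def maybe_contains(bits, sha):
        return all(bits[p // 8] & (1 << (p % 8)) for p in _positions(sha))

    # Client side: build the filter from the shas the client believes the
    # server knows about (its local copy of the server's refs) and send it
    # along with the push request.
    def client_filter(local_copy_of_server_refs):
        return make_filter(local_copy_of_server_refs.values())

    # Server side: advertise master unconditionally, plus only those refs
    # whose sha is (probably) not already known to the client.  For every
    # ref that is filtered out, the client uses its local copy of that ref
    # as if it had just been sent.
    def advertise(server_refs, bits):
        return {ref: sha for ref, sha in server_refs.items()
                if ref == "refs/heads/master" or not maybe_contains(bits, sha)}

With the made-up parameters above the filter is 1MiB, and a false
positive only means the server omits a ref that the client then takes
from its local copy, which the scheme does for every filtered ref anyway.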







