Re: Performance issue: initial git clone causes massive repack

Before I answer the rest of your post, I'd like to note that the choice
between single-repo, repo-per-package, and repo-per-category has been
flogged to death within Gentoo.

I did not come to the Git mailing list to rehash those choices. I came
here to find a solution to the performance problem. While it shows up
with our repo, I'm certain that we're not the only people with the
problem. The GSoC 2009 ideas contain a potential project for caching
generated packs, which, while valuable in itself, could be partially
obviated by serving suitable pre-built packs (if they exist) without any
repacking.
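
For illustration, a rough sketch of the server-side maintenance that
would make such a pre-built pack available for reuse (the repository
below is a throwaway stand-in, not our actual gentoo-x86 repo): repack
everything into a single aggressive pack ahead of time, so a later full
clone can mostly stream existing pack data.

```shell
#!/bin/sh
set -e
# Throwaway demo repository standing in for the real server-side repo.
repo=$(mktemp -d)
git init -q "$repo"
(
  cd "$repo"
  echo 'SLOT="0"' > demo.ebuild
  git add demo.ebuild
  git -c user.name=demo -c user.email=demo@example.org \
      commit -qm 'add demo ebuild'
  # One aggressive repack, done ahead of time on the server; clone
  # requests can then reuse this pack instead of each triggering a
  # fresh pack generation.
  git repack -a -d -f --window=250 --depth=250
  ls .git/objects/pack/*.pack
)
```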

On Sun, Apr 05, 2009 at 05:54:53AM +0200, Nicolas Sebrecht wrote:
> > That causes incredible bloat, unfortunately.
> > 
> > I'll summarize why here for the git mailing list. Most of our
> > developers have the entire tree checked out, and in informal surveys,
> > would like to continue to do so. There are ~13500 packages right now.
> Each developer doesn't work on so many packages, right? From my point
> of view, checking out the entire tree is the wrong way to do things.
Also, I should note that working on the tree isn't the only reason to
have the tree checked out. While the great majority of Gentoo users have
their trees purely from rsync, there is nothing stopping you from using
a tree from CVS (anonCVS for the users, master CVS server for the
developers).

A quick stats run shows that while some developers only touch a few
packages, at least 200 developers have made a major change to 100 or
more packages.

> > Without tail packing, the Gentoo tree is presently around 520MiB (you
> > can fit it into ~190MiB with tail packing). This means that
> > repo-per-package would have an overhead in the range of 400%.
> Don't know about the business for Gentoo, but HDD is cheap.
There's no reason to take on that much bloat just to change the layout.

> Also, I'd like to know how much space you will gain with the CVS to
> Git migration. How much bigger is a CVS repo compared to a Git one?
For the CVS checkouts right now: 
- ~410MiB of content (w/ 4kb inodes)
- ~240MiB of CVS overhead (w/ 4kb inodes)
(sorry about the earlier 520MiB number, I forgot to exclude a local dir
of stats data on my box when I ran du quickly).

Our experimental Git, with only a single repo for gentoo-x86:
- ~410MiB of content (w/ 4kb inodes)
- 80MiB - 1.6GiB of Git total overhead.

80MiB of overhead is the total overhead with a shallow clone at depth 1.
1.6GiB is with the full history.
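
The depth-1 case can be reproduced in miniature; a sketch with a tiny
throwaway repository (paths and history are made up for the demo):

```shell
#!/bin/sh
set -e
# Build a small source repo with three commits of history.
src=$(mktemp -d)
git init -q "$src"
(
  cd "$src"
  for i in 1 2 3; do
    echo "rev $i" > file
    git add file
    git -c user.name=demo -c user.email=demo@example.org \
        commit -qm "rev $i"
  done
)
# A shallow clone at depth 1 transfers only the tip commit and its
# trees/blobs (the 80MiB case above), not the full history.
dst=$(mktemp -d)/clone
git clone -q --depth 1 "file://$src" "$dst"
git -C "$dst" rev-list --count HEAD   # a depth-1 clone sees one commit
```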

And per-package numbers, because we DID do an experimental conversion
last year, although the packs might not have been optimal:
- ~410MiB of content (w/ 4kb inodes)
- 4.7GiB of Git total overhead, with a breakdown:
  - 1.9GiB in inode waste
  - 2.8GiB in packs

> One repo per category could be a good compromise, assuming one
> separate branch per package, then.
Other downsides to repo-per-category and repo-per-package:
- Raises difficulty in adding a new package/category. 
  You cannot just do 'mkdir && vi ... && git add && git commit' anymore.
- The directory names for both the category AND the package are not
  specified in the ebuild; as such, unless they are checked out to the
  right location, you will get breakage (always for the package name,
  and about 10% of the time for categories).
- You cannot use git-cvsserver with them cleanly and have the correct
  behavior (we DO have developers that want to use the CVS emulation
  layer) - adding a category or a package would NOT trigger the
  addition of a new repo on the server when needed.
- Does NOT present a good base for anybody wanting to branch the entire
  tree themselves.
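
To make the first downside concrete, here is what adding a new package
looks like in the single-repo layout: one ordinary commit, with no
server-side action needed first. (The category and package names below
are only examples.)

```shell
#!/bin/sh
set -e
# Single-repo layout: a new package is just a directory plus a commit.
repo=$(mktemp -d)
git init -q "$repo"
cd "$repo"
mkdir -p dev-perl/Sub-Name
cat > dev-perl/Sub-Name/Sub-Name-0.04.ebuild <<'EOF'
DESCRIPTION="(re)name a sub"
SLOT="0"
EOF
git add dev-perl
git -c user.name=demo -c user.email=demo@example.org \
    commit -qm 'dev-perl/Sub-Name: new package'
# With repo-per-package, this step would instead require creating a new
# repository on the server before anything could be committed.
git ls-files
```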
  

> > Additionally, there's a lot of commonality between ebuilds and packages,
> > and having repo-per-package means that the compression algorithms can't
> > make use of it - dictionary algorithms are effective at compression for
> > a reason.
> Please, no. We are in the long term issues. Compression will be
> efficient. It's all about the content of the files and dictionary
> algorithms certainly will do a good job over the ebuilds revisions.
We're already on track to drop the CVS $Header$ lines, after which many
of the ebuilds will be smaller still. Here's our prototype
dev-perl/Sub-Name-0.04:
====
# Copyright 1999-2009 Gentoo Foundation
# Distributed under the terms of the GNU General Public License v2
MODULE_AUTHOR=XMATH
inherit perl-module
DESCRIPTION="(re)name a sub"
LICENSE="|| ( Artistic GPL-2 )"
SLOT="0"
KEYWORDS="~amd64 ~x86"
IUSE=""
SRC_TEST=do
====

We could carry all the CPAN packages from CPAN author XMATH with only
the DESCRIPTION string changing between them. KEYWORDS then just
changes over the package lifespan.
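
A rough way to see the delta/dictionary effect in practice is to commit
two near-identical ebuilds into one repo and look at the resulting pack:
it grows far less than the raw content size would suggest. (Everything
below is a contrived demo, not real tree data.)

```shell
#!/bin/sh
set -e
repo=$(mktemp -d)
git init -q "$repo"
cd "$repo"
# Two ebuilds that differ only in their DESCRIPTION line, mirroring the
# XMATH example above.
for pkg in Sub-Name Sub-Identify; do
  mkdir -p "dev-perl/$pkg"
  {
    echo '# Copyright 1999-2009 Gentoo Foundation'
    echo 'MODULE_AUTHOR=XMATH'
    echo "DESCRIPTION=\"$pkg\""
    echo 'SLOT="0"'
    echo 'KEYWORDS="~amd64 ~x86"'
  } > "dev-perl/$pkg/$pkg-0.04.ebuild"
done
git add dev-perl
git -c user.name=demo -c user.email=demo@example.org \
    commit -qm 'two near-identical ebuilds'
# Packing both blobs in one repo allows one to be stored as a delta
# against the other; repo-per-package forfeits exactly this
# cross-package compression.
git repack -a -d
set -- .git/objects/pack/*.pack
wc -c "$1"   # total pack size for both ebuilds together
```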

-- 
Robin Hugh Johnson
Gentoo Linux Developer & Infra Guy
E-Mail     : robbat2@xxxxxxxxxx
GnuPG FP   : 11AC BA4F 4778 E3F6 E4ED  F38E B27B 944E 3488 4E85


