Optimizing cloning of a high object count repository

Hi,
there are currently several ideas for moving Gentoo's CVS repository to another SCM. Recent tests showed that svn does not make anything better (in most performance and size based benchmarks it actually comes out worse). Another idea is to move to git. It looks really promising in size-based benchmarks, but cloning seems nearly impossible. The current test repository is available at git://git.overlays.gentoo.org/exp/gentoo-x86.git; it is around 900MB in size and contains 4696137 objects. Counting the objects on the server really takes ages, and compressing takes much longer.

The linux repository seems to be somewhat smaller, but in the same range for object count and repository size, yet clones are much, much faster. Is there any way to optimize the server-side operations (counting and compressing objects) so that we get the same speed as from git.kernel.org, which does it in nearly no time, with my bandwidth seemingly the only limiting factor?

The only other information I have is that Robin H. Johnson managed to build a single ~910MiB pack for the whole repository.

Thanks in advance,
	Resul
