Re: Optimizing cloning of a high object count repository

On Saturday 13 December 2008 16:46:50 you wrote:
[...]
> >  The linux repository seems to be somewhat smaller, but it is in the same
> > range in object count and repository size, yet clones are much, much faster.
> > Is there any way to optimize the server-side operations, like the counting
> > and compressing of objects, so we get the same speed as we do from
> > git.kernel.org (which does it in nearly no time; the only limiting
> > factor seems to be my bandwidth)?
> >  The only other information I have is that Robin H. Johnson made a single
> >  ~910MiB pack for the whole repository.
>
> Make yearly packed repository snapshots and publish them via http.
> People can wget the latest snapshot, then pull updates later.
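
For the record, that snapshot workflow could be done with git bundle, roughly
like this (just a sketch; the bundle name and the URLs are only illustrative):

    # on the server: create a snapshot bundle and publish it over http
    git bundle create gentoo-x86-2008.bundle --all

    # on a client: fetch the snapshot, clone from it, then point at the live repo
    wget http://example.org/snapshots/gentoo-x86-2008.bundle
    git clone gentoo-x86-2008.bundle gentoo-x86
    cd gentoo-x86
    git config remote.origin.url http://git.overlays.gentoo.org/gitroot/exp/gentoo-x86.git
    git fetch origin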
Something like that would be a workaround, but it doesn't explain why
git.kernel.org delivers Torvalds' repository without any noticeable counting
and compressing time. Maybe it has something to do with the config I found
inside the repository:
http://git.overlays.gentoo.org/gitroot/exp/gentoo-x86.git/config
It says that it isn't a bare repository.
Before I forget: I was wrong about it being a single ~910 MiB pack. Somebody
seems to have repacked it into seven separate packs.
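
If the difference is just that git.kernel.org keeps the kernel repository fully
packed, then a full repack of gentoo-x86 on the server might give a similar
effect, since pack-objects can then reuse the existing deltas during a clone
instead of recompressing everything. Roughly (the path is made up, and the
--window/--depth values are only a guess):

    cd /var/git/gentoo-x86.git   # wherever the repository lives on the server
    git repack -a -d -f --window=250 --depth=250
    git prune                    # remove unreachable loose objects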

Regards,
	Resul
--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
