Re: Strategy to deal with slow cloners

On 19.04.2021 14:46, Konstantin Ryabitsev wrote:

> I try to keep repositories routinely repacked and optimized for clones, in
> hopes that most operations needing lots of objects would be sending packs
> straight from disk. However, every now and again a client from a slow
> connection requests a large clone and then takes half a day downloading it,
> resulting in gigabytes of RAM being occupied by a temporary pack.
> 
> Are there any strategies to reduce RAM usage in such cases, other than
> vm.swappiness (which I'm not sure would work, since it's not a sleeping
> process)? Is there a way to write large temporary packs somewhere to disk
> before sendfile'ing them?

There is the packfile-uris feature, which allows protocol v2 servers to
advertise static packfiles via http/https. But clients must explicitly
enable it via fetch.uriprotocols, so this only works for newer clients
that explicitly ask for it. See
Documentation/technical/packfile-uri.txt.
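
As a rough sketch only (the hashes and URLs are placeholders, and the
exact value format is described in that document), the server would
point protocol v2 clients at a pre-generated pack, and a client would
have to opt in, roughly like this:

    # server side: advertise a pre-generated pack for a large blob
    git config uploadpack.blobpackfileuri \
        "<object-hash> <pack-hash> https://example.com/big.pack"

    # client side: explicitly allow downloading packs over https
    git -c protocol.version=2 -c fetch.uriprotocols=https \
        clone https://git.example.com/repo.git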

From my limited understanding, per clone/fetch the server can send at
most one packfile.

What is the advertised git clone command on the website? Maybe something
like git clone --depth=$num would help reduce the load; usually not
everyone needs the whole history.
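
For example (repository URL is just a placeholder), a shallow clone
gives users only the latest snapshot, and they can deepen it later if
they really need the full history:

    # shallow clone with only the most recent commit
    git clone --depth=1 https://git.example.com/repo.git

    # later, fetch the rest of the history if it turns out to be needed
    git fetch --unshallow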



