Re: Strategy to deal with slow cloners

On Mon, Apr 19 2021, Konstantin Ryabitsev wrote:

> Hello:
>
> I try to keep repositories routinely repacked and optimized for clones, in
> hopes that most operations needing lots of objects would be sending packs
> straight from disk. However, every now and again a client from a slow
> connection requests a large clone and then takes half a day downloading it,
> resulting in gigabytes of RAM being occupied by a temporary pack.
>
> Are there any strategies to reduce RAM usage in such cases, other than
> vm.swappiness (which I'm not sure would work, since it's not a sleeping
> process)? Is there a way to write large temporary packs somewhere to disk
> before sendfile'ing them?

Aside from any Git-specific solutions, perhaps the right kernel settings
plus a cron script that re-nices such processes once they've been active
for more than some threshold would help? A rough sketch of what I mean
is below.
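
Something along these lines (a minimal sketch, assuming psutil is
installed and that the pack-serving processes show up under names like
git-upload-pack / pack-objects; the 30-minute threshold and the process
names are placeholders, not tested against a real git server):

    #!/usr/bin/env python3
    # Hypothetical cron job: demote the priority of long-running
    # pack-serving processes so the kernel treats them as background work.
    import time
    import psutil

    MAX_AGE_SECONDS = 30 * 60                       # "X amount of time"
    TARGET_NAMES = {"git-upload-pack", "git", "pack-objects"}
    LOW_PRIORITY = 19                               # weakest nice level

    def renice_slow_cloners():
        now = time.time()
        for proc in psutil.process_iter(["name", "create_time", "nice"]):
            try:
                if proc.info["name"] not in TARGET_NAMES:
                    continue
                if now - proc.info["create_time"] < MAX_AGE_SECONDS:
                    continue
                if proc.info["nice"] < LOW_PRIORITY:
                    # demote the long-running pack sender
                    proc.nice(LOW_PRIORITY)
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                continue  # process exited or we lack permission; skip

    if __name__ == "__main__":
        renice_slow_cloners()

Run it from cron every few minutes; it only lowers priority, so a
process that finishes quickly is never touched.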

I'm not familiar with the guts of Linux's swapping algorithm, but some
results online seem to suggest that it takes the nice level into account
when deciding what to swap out, i.e. with the right level it might give
preference to swapping out this mostly idle process.


