Re: Strategy to deal with slow cloners

Konstantin Ryabitsev <konstantin@xxxxxxxxxxxxxxxxxxx> wrote:
> Hello:
> 
> I try to keep repositories routinely repacked and optimized for clones, in
> hopes that most operations needing lots of objects would be sending packs
> straight from disk. However, every now and again a client from a slow
> connection requests a large clone and then takes half a day downloading it,
> resulting in gigabytes of RAM being occupied by a temporary pack.

Yeah, I'm familiar with the problem.
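
FWIW, "routinely repacked and optimized for clones" typically
boils down to something like this (the path and flags below are
only illustrative):

  # single full pack + reachability bitmaps, so big fetches/clones
  # can mostly reuse the pack already sitting on disk
  git -C /srv/git/foo.git repack -a -d -b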

> Are there any strategies to reduce RAM usage in such cases, other than
> vm.swappiness (which I'm not sure would work, since it's not a sleeping
> process)? Is there a way to write large temporary packs somewhere to disk
> before sendfile'ing them?

public-inbox-httpd actually switched buffering strategies in
2019 to buffer large responses to temporary files on disk, so
the failure mode is hitting ENOSPC instead of ENOMEM :)

  https://public-inbox.org/meta/20190629195951.32160-11-e@xxxxxxxxx/
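
The gist (a rough sketch only, not the actual public-inbox-httpd
code; the sub name, pipe variable and buffer size are made up) is
to drain the pack stream into a temporary file first, then feed
the client from disk at whatever rate it can manage:

  use File::Temp ();

  # $pack_rd: hypothetical pipe from whatever is generating the pack
  sub spool_pack_to_disk {
          my ($pack_rd) = @_;
          my $tmp = File::Temp->new(TMPDIR => 1); # auto-removed later
          while (read($pack_rd, my $buf, 65536)) {
                  # any failure here is ENOSPC, not ENOMEM
                  print $tmp $buf or die "write: $!";
          }
          $tmp->flush or die "flush: $!";
          seek($tmp, 0, 0) or die "seek: $!";
          $tmp; # caller streams this to the slow client from disk
  }

That way the slow part of the transfer is backed by the
filesystem rather than the heap.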

It doesn't currently support sendfile (I didn't want separate
HTTPS vs HTTP code paths), but that's probably not too big of a
deal, especially with slow clients.

It's also capable of serving non-public-inbox coderepos (and
running cgit).  Instead of configuring every [coderepo "..."]
section manually, publicinbox.cgitrc can be set in
~/.public-inbox/config to mass-configure [coderepo] sections
from an existing cgitrc.  It's only lightly tested for my setup
at the moment, though.
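
That is, something like this (paths, names and URLs below are
made up, just to show the shape):

  # ~/.public-inbox/config
  [publicinbox]
          cgitrc = /etc/cgitrc

  # ...instead of hand-writing each repo:
  [coderepo "foo"]
          dir = /srv/git/foo.git
          cgitUrl = https://git.example.com/foo.git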

Mapping publicinbox.<name>.coderepo to [coderepo "..."]
entries for the solver (blob reconstruction) isn't required;
it's a bit of a pain at large scale and I haven't figured
out how to make it easier.
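
That per-inbox association looks something like this (names and
paths made up):

  [publicinbox "meta"]
          inboxdir = /srv/inbox/meta
          address = meta@example.com
          coderepo = foo

  [coderepo "foo"]
          dir = /srv/git/foo.git

which is what lets solver attempt blob reconstruction against
foo.git.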


