Re: [RFC 0/3] dumb HTTP transport speedups

On 2016-07-11 at 22:51, Eric Wong wrote:

> TL;DR: dumb HTTP clone from a certain badly-packed repo goes from
> ~2 hours to ~30 min; memory usage drops from 2G to 360M
> 
> 
> I hadn't packed the public repo at https://public-inbox.org/git
> for a few weeks.  As an admin of a small server with limited memory
> and CPU resources but fairly good bandwidth, I prefer clients
> use dumb HTTP for initial clones.

Hopefully the solution / workaround for the large initial clone
problem using bundles (`git bundle`), which can be transferred
resumably, will get standardized and automated.
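
One workable recipe today is roughly the following (the bundle file
name and download URL below are only placeholders); the missing part
is having clone/fetch drive it automatically:

  # server side: bundle up all refs and publish the file over HTTP
  git bundle create public-inbox-git.bundle --all

  # client side: download resumably, clone from the bundle, then
  # repoint origin at the live repository and fetch the remainder
  wget -c https://example.org/public-inbox-git.bundle
  git clone public-inbox-git.bundle git
  cd git
  git remote set-url origin https://public-inbox.org/git
  git fetch origin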

Do you use bitmap indices for speeding up fetches?
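
In case it helps, a minimal sketch of enabling them server-side
(assuming a bare repository; bitmaps speed up pack-objects'
counting phase for smart fetches):

  # have future full repacks write a bitmap index
  git config repack.writeBitmaps true
  git repack -a -d

  # or write one directly in a one-off repack
  git repack -a -d --write-bitmap-index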

BTW, IMVHO the problem with dumb HTTP is the latency, not the extra
bandwidth needed...

Best,
-- 
Jakub Narębski



