Jakub Narębski <jnareb@xxxxxxxxx> wrote:
> On 2016-07-11 at 22:51, Eric Wong wrote:
> > TL;DR: dumb HTTP clone from a certain badly-packed repo goes from
> > ~2 hours to ~30 min; memory usage drops from 2G to 360M
> >
> > I hadn't packed the public repo at https://public-inbox.org/git
> > for a few weeks.  As an admin of a small server with limited memory
> > and CPU resources but fairly good bandwidth, I prefer clients
> > use dumb HTTP for initial clones.
>
> Hopefully the solution / workaround for the large initial clone
> problem utilizing bundles (`git bundle`), which can be resumably
> transferred, would get standardized and automated.

I've been hoping to look at this more in the coming weeks/months;
a rough sketch of the workflow I have in mind is at the end of this
mail.  It would be nice if bundles and packs could be unified somehow
to avoid doubling storage on the server.

> Do you use bitmap indices for speeding up fetches?

Yes (see the repack sketch below), but slow clients are still a
problem since big responses keep memory-hungry processes running
while trickling (or waste disk space buffering the pack output up
front).

Static packfiles/bundles are nice since all the clients can share
the same data on the server side as it's trickled out.

> BTW. IMVHO the problem with dumb HTTP is the latency, not the
> extra bandwidth needed...

I enabled persistent connections for 404s on loose objects for
this reason :)  We should probably be doing it across the board
on 404s, just haven't gotten around to it...

Increasing default parallelism should also help; but might hurt
some servers which can't handle many connections...  Hard to
imagine people using antiquated prefork servers for slow clients
in a post-Slowloris world, but maybe it happens?
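
In case it helps anyone else serving dumb HTTP: the server-side
maintenance amounts to roughly the following (the repo path here is
made up for illustration; the exact invocation in my cron job may
differ):

  # repack with bitmaps so smart fetches stay cheap
  git -C /srv/git/example.git config repack.writeBitmaps true
  git -C /srv/git/example.git repack -adb

  # regenerate info/refs and objects/info/packs so dumb HTTP
  # clients can find the new pack
  git -C /srv/git/example.git update-server-info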
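
And the bundle workflow I'd like to see standardized and automated,
roughly (paths, URLs and filenames are hypothetical; this is just a
sketch of what a client could already do by hand today):

  # server: regenerate a full bundle periodically, e.g. after repacking
  git -C /srv/git/example.git bundle create /var/www/example/clone.bundle --all

  # client: fetch resumably over plain HTTP, then clone from the bundle
  wget -c https://example.org/example/clone.bundle
  git clone clone.bundle example
  git -C example remote set-url origin https://example.org/example.git
  git -C example fetch origin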