On Thu, Nov 19 2020, Jeff Hostetler wrote:

> On 11/18/20 11:01 PM, Matheus Tavares wrote:
>> Hi, Jeff
>>
>> On Mon, Nov 16, 2020 at 12:19 PM Jeff Hostetler <git@xxxxxxxxxxxxxxxxx> wrote:
>>>
>>> I can't really speak to NFS performance, but I have to wonder if there's
>>> not something else affecting the results -- 4 and/or 8 core results are
>>> better than 16+ results in some columns. And we get diminishing returns
>>> after ~16.
>>
>> Yeah, that's a good point. I'm not sure yet what's causing the
>> diminishing returns, but Geert and I are investigating. Maybe we are
>> hitting some limit for parallelism in this scenario.
>
> I seem to recall back when I was working on this problem that
> the unzip of each blob was a major pain point. Combine this with
> long delta-chains and each worker would need multiple rounds of
> read/mmap, unzip, and de-delta before it had the complete blob
> and could then smudge and write.
>
> This makes me wonder if repacking the repo with shorter delta-chains
> affects the checkout times, and improves the perf when there are
> more workers. I'm not saying that this is a solution, but rather
> an experiment to see if it changes anything and maybe adjust our
> focus.

I've had partial success with "git gc --keep-largest-pack" /
gc.bigPackThreshold=N, where N is at least the size you get from a
fresh "git clone", when on NFS (rough sketch at the end of this mail).

It has the effect of essentially implementing a version of what you're
suggesting, but in an arguably better way. Your initial clone will have
whatever depth of chains you have, but all new objects pulled down will
go into new packs/objects that won't share chains with that old big
pack.

So your repository will be bigger overall, but your old and new pack(s)
will eventually come to mostly reflect cold/hot object storage. What
you need from a pack is then more likely to already have been fetched
into the FS cache, and over an NFS mount those requests may have been
pre-fetched already.

You can also more effectively warm the local OS cache by cat-ing the
pack-files that aren't the big pack to /dev/null on login or whatever
(also sketched at the end).

>>
>>> I'm wondering if during these test runs, you were IO vs CPU bound and if
>>> VM was a problem.
>>
>> I would say we are more IO bound during these tests. While a sequential
>> linux-v5.8 checkout usually uses 100% of one core on my laptop's SSD,
>> in this setup, it only used 5% to 10%. And even with 64 workers (on a
>> single core), CPU usage stays around 60% most of the time.
>>
>> About memory, the peak PSS was around 1.75GB, with 64 workers, and the
>> machine has 10GB of RAM. But are there other numbers that I should keep
>> an eye on while running the test?
>>
>>> I'm wondering if setting thread affinity would help here.
>>
>> Hmm, I only had one core online during the benchmark, so I think thread
>> affinity wouldn't impact the runtime.
>
> I wasn't really thinking about the 64 workers on 1 core case. I was
> more thinking about the 64 workers on 64 cores case and wondering
> if workers were being randomly bounced from core to core and we were
> thrashing.
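
For the repack experiment Jeff suggests above, something along these
lines should be enough to try it (untested, and the --depth/--window
values below are just arbitrary numbers to play with, not
recommendations):

    # Force re-deltification of the whole repo with a much shorter
    # delta chain, then re-run the checkout benchmark on the result.
    git repack -a -d -f --depth=10 --window=10

The -f is what matters here: it makes pack-objects recompute deltas
instead of reusing the existing ones, which is what actually shortens
the chains.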
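
The bigPackThreshold setup I mention above is concretely something like
this (untested; the 2g is just an example value, it wants to be at
least as big as the pack a fresh clone of the repo gives you):

    # Keep any pack bigger than 2g untouched when gc runs, so new
    # objects accumulate in separate, smaller packs.
    git config gc.bigPackThreshold 2g
    git gc

    # Or as a one-off, without setting any config:
    git gc --keep-largest-pack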
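
And the cache warming is something like this in a login script or cron
job (untested sketch; assumes an "ls" that understands -S for sorting
by size):

    # Read every pack except the largest one to pre-populate the
    # local FS cache with the "hot" packs.
    biggest=$(ls -S .git/objects/pack/*.pack | head -n 1)
    for p in .git/objects/pack/*.pack
    do
        test "$p" = "$biggest" && continue
        cat "$p" >/dev/null
    done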
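
On Jeff's last point about workers bouncing between cores: one cheap
way to see whether that's happening at all would be to wrap whatever
checkout is being benchmarked in perf (untested; assumes perf is
available, and v5.8 is just the tag from the benchmarks above):

    # Count context switches and cross-core migrations during the
    # checkout being measured.
    perf stat -e context-switches,cpu-migrations \
        git checkout v5.8

If cpu-migrations comes out high relative to the number of workers,
pinning them with taskset(1) might be worth an experiment.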