Re: Partial-clone cause big performance impact on server

On 8/11/22 4:09 AM, 程洋 wrote:
> Hi.
>      We observed big disk space savings with partial clone and now require all of our users (2000+) to clone the repository with partial clone (filter=blob:none).
>      However, at busy times we found it is extremely slow for users to fetch. Here is what we did.
> 
>     1. We asked all users to fetch with filter=blob:none, and the result is remarkable: our download size per user decreased from 460G to 180G.

I hope this includes the blob download during the initial checkout,
because otherwise your repository has a very strange shape for commits
and trees alone to take up 180 GB.
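
For reference, a blobless clone makes that second download visible during
the checkout step. A rough example, with a placeholder URL:

  git clone --filter=blob:none https://example.com/big-repo.git

The filtered pack arrives first with only commits and trees, and a second
fetch then pulls the blobs needed to populate the working tree at HEAD.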

>     2. But at busy times, everyone's fetch becomes slow. (At idle hours it takes us 5 minutes to clone a big repository, but it takes more than 1 hour to clone the same repository at busy hours.)
>     3. With GIT_TRACE_PACKET=1 we found that, on big repositories (200K+ refs, 6M+ objects), Git sends 40k "want" lines.

You only have six million objects in the repo and yet it is that large?
There must be some very large blobs.

>     4. We then traced our server (which is Gerrit with JGit) and found that it is counting objects. When we checked those 40k objects, most of them were blobs rather than commits (which means they are not in the bitmap).

Are you seeing any commits in these requests? If the Git client is asking
for blobs, then they should not be mixed with commit wants. What kind of
operation are you doing to see these mixed wants?
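
One quick way to check from the client side is to count the want lines in
the packet trace (the exact trace format varies a bit across Git versions,
and "origin" here is just a placeholder remote):

  GIT_TRACE_PACKET=1 git fetch origin 2>&1 | grep -c "want "

Spot-checking a few of those object IDs with "git cat-file -t" on the
server side would show whether commits are mixed in with the blobs.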

If the request was only blobs, then the server should not need a "Counting
objects" phase. It should jump immediately to preparing the objects (which
will likely require parsing deltas, and that can be expensive). I don't
know if JGit is doing something different, though.

>     5. We believe that is the root cause of our problem. Git sends too many "want SHA1" lines that are not in the bitmap, causing the server to count objects frequently, which then slows down the server.
> 
> What we want is to download only the things we need to check out a specific commit. But if one commit contains that many objects (in our case, 40k+), it takes more time to count them than to download them.

One thing that the microsoft/git fork uses in its "git-gvfs-helper" tool
(which speaks the GVFS Protocol as a replacement for partial clone when
using Azure Repos as a server) is a batched download of missing objects [1].
The initial limit is 4000 objects at a time, but that helps keep each
request small enough that it is less likely to fail for scale reasons alone.

[1] https://github.com/microsoft/git/blob/vfs-2.37.1/gvfs-helper.c#L3510-L3520

It might be interesting to create such batch-downloads for these partial
clone blob-fetches.
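
For illustration only, here is a minimal sketch of that batching idea. It
is not the gvfs-helper code itself, and request_batch() is a made-up
stand-in for whatever transport layer actually issues the request:

#include <stdio.h>
#include <stddef.h>

/* Same initial limit that gvfs-helper starts with. */
#define MAX_OBJECTS_PER_REQUEST 4000

/* Hypothetical transport call: send one request for 'count' object IDs. */
static void request_batch(const char **oids, size_t count)
{
    printf("requesting %zu objects (first: %s)\n", count, oids[0]);
}

/* Split the full list of missing object IDs into bounded requests. */
static void fetch_missing_objects(const char **oids, size_t nr)
{
    size_t start;

    for (start = 0; start < nr; start += MAX_OBJECTS_PER_REQUEST) {
        size_t count = nr - start;

        if (count > MAX_OBJECTS_PER_REQUEST)
            count = MAX_OBJECTS_PER_REQUEST;
        request_batch(oids + start, count);
    }
}

int main(void)
{
    /* Toy stand-ins for the 40k blob OIDs from the report. */
    const char *oids[] = { "oid0001", "oid0002", "oid0003" };

    fetch_missing_objects(oids, sizeof(oids) / sizeof(oids[0]));
    return 0;
}

Whether 4000 is the right cut-off for this workload would need measurement
against that Gerrit/JGit server.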

Thanks,
-Stolee


