RE: [Question] clone performance

On August 24, 2019 5:00 PM, Bryan Turner wrote:
> On Fri, Aug 23, 2019 at 6:59 PM <randall.s.becker@xxxxxxxxxx> wrote:
> >
> > Hi All,
> >
> > I'm trying to answer a question for a customer on clone performance.
> > They are doing at least 2-3 clones a day, of repositories with about
> > 2500 files and 10Gb of content. This is stressing the file system.
> 
> Can you go into a bit more detail about what "stress" means? Using too
> much disk space? Too many IOPS reading/packing? Since you specifically
> called out the filesystem, does that mean the CPU/memory usage is
> acceptable?

The upstream is BitBucket, which runs gc frequently, so I'm not sure any of this is related to the pack structure. Git is spending most of its time writing the large number of large files into the working directory - it mostly stresses the disk, with some CPU load as well (neither is acceptable to the customer). I am really unsure there is any way to make things better. The core issue is that the customer insists on doing a fresh clone for every feature branch instead of using pull/checkout. I have been unable to change their mind - to this point, anyway.
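For reference, the clone-per-feature-branch pattern can usually be replaced by a single clone plus `git worktree`, which gives each branch its own working directory while sharing one object store, so only the checkout itself hits the disk. A minimal sketch (the repository and branch names here are made up for illustration; the real upstream would be the Bitbucket URL):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Stand-in for the upstream: a tiny local repo (illustrative only)
git init -q upstream
git -C upstream config user.email dev@example.com
git -C upstream config user.name dev
git -C upstream commit -q --allow-empty -m "initial"

# Clone once...
git clone -q upstream work

# ...then create each feature branch as a worktree instead of a new
# clone; the object store is shared, so no re-download/re-pack occurs
git -C work worktree add -q ../feature-x -b feature-x
git -C "$tmp/feature-x" rev-parse --abbrev-ref HEAD
```

If they really must clone, `git clone --reference <existing-repo> <url>` at least borrows objects from a local repository instead of re-fetching them, though it doesn't avoid writing the working tree.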

We are going to be setting up a detailed performance analysis that may lead to some data the git team can use.

Regards,
Randall




