On Mon, Aug 26, 2019 at 12:04 PM Jeff King <peff@xxxxxxxx> wrote:
>
> On Mon, Aug 26, 2019 at 10:16:48AM -0400, randall.s.becker@xxxxxxxxxx wrote:
>
> > On August 24, 2019 5:00 PM, Bryan Turner wrote:
> > > On Fri, Aug 23, 2019 at 6:59 PM <randall.s.becker@xxxxxxxxxx> wrote:
> > > >
> > > > Hi All,
> > > >
> > > > I'm trying to answer a question for a customer on clone performance.
> > > > They are doing at least 2-3 clones a day, of repositories with about
> > > > 2500 files and 10Gb of content. This is stressing the file system.
> > >
> > > Can you go into a bit more detail about what "stress" means? Using too
> > > much disk space? Too many IOPS reading/packing? Since you specifically
> > > called out the filesystem, does that mean the CPU/memory usage is
> > > acceptable?
> >
> > The upstream is BitBucket, which does a gc frequently. I'm not sure
> > any of this is relating to the pack structure. Git is spending most of
> > its time writing the large number of large files into the working
> > directory - it is stressing mostly the disk, with a bit on the CPU
> > (neither is acceptable to the customer). I am really unsure there is
> > any way to make things better. The core issue is that the customer
> > insists on doing a clone for every feature branch instead of using
> > pull/checkout. I have been unable to change their mind - to this point
> > anyway.
>
> Yeah, at the point of checkout there's basically no impact from anything
> the server is doing or has done (technically it could make things worse
> for you by returning a pack with absurdly long delta chains or
> something, but that would be CPU and not disk stress).
>
> I doubt there's much to optimize in Git here. It's literally just
> writing files to disk as quickly as it can, and it sounds like disk
> performance is your bottleneck.

Well, if it's just checkout, Stolee's sparse-checkout series he just
posted may be of interest to them... once it's polished up and included
in git, of course.
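For readers following along, here is a rough sketch of what the
sparse-checkout workflow being discussed ends up looking like once the
builtin is available (the `git sparse-checkout` command shipped in git
2.25; the repository layout, paths, and branch name below are invented
purely for illustration, not taken from the thread):

```shell
#!/bin/sh
# Sketch only: demonstrates limiting how much of the working tree a
# clone materializes, which is the knob relevant to the disk-stress
# complaint above. Requires git >= 2.25.
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Build a toy "server" repository with two top-level directories.
git init -q server
cd server
git checkout -qb trunk
mkdir src docs
echo 'int main(void) { return 0; }' > src/main.c
echo 'manual' > docs/manual.txt
git add .
git -c user.name=demo -c user.email=demo@example.invalid \
    commit -qm 'initial import'
cd ..

# Clone without touching the working tree, restrict it to src/, then
# check out. Only src/ is ever written to disk; docs/ never is.
git clone -q --no-checkout server client
cd client
git sparse-checkout init --cone
git sparse-checkout set src
git checkout -q trunk

ls   # src/ is present; docs/ was never materialized
```

This doesn't shrink the object transfer itself (pairing it with a
partial clone would be needed for that), but it does cut the checkout
I/O, which Jeff identifies above as the actual bottleneck.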