Re: Inefficiency of partial shallow clone vs shallow clone + "old-style" sparse checkout

28.03.2020, 19:58, "Derrick Stolee" <stolee@xxxxxxxxx>:
> On 3/28/2020 10:40 AM, Jeff King wrote:
>>  On Sat, Mar 28, 2020 at 12:08:17AM +0300, Konstantin Tokarev wrote:
>>
>>>  Is it a known thing that addition of --filter=blob:none to workflow
>>>  with shallow clone (e.g. --depth=1) and following sparse checkout may
>>>  significantly slow down process and result in much larger .git
>>>  repository?
>
> In general, I would recommend not using shallow clones in conjunction
> with partial clone. The blob:none filter will get you what you really
> want from shallow clone without any of the downsides of shallow clone.

Is it really so?

As you can see from my measurements [1], in my case a simple shallow clone (1)
runs faster than a simple partial clone (2) and produces a slightly smaller .git
directory, from which I infer that (2) downloads some data that (1) does not.
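
Schematically, and with the repository URL as a placeholder, the two
invocations I'm comparing are of this form (the exact commands and timings
are in the gist):

    # (1) simple shallow clone
    git clone --depth=1 <url>

    # (2) simple partial clone
    git clone --filter=blob:none <url>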

To be clear, the use case I'm interested in right now is checking out sources in
a cloud CI system like GitHub Actions for a one-shot build. Currently the checkout
usually takes 1-2 minutes, and my hope was that someday it would be possible
to make it faster.
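
A rough sketch of the kind of checkout sequence I have in mind (the URL,
directory name and paths below are just placeholders, not the ones from my
tests):

    # partial + shallow clone, without populating the working tree yet
    git clone --no-checkout --depth=1 --filter=blob:none <url> src
    cd src

    # "old-style" sparse checkout: enable it and list the paths to materialize
    git config core.sparseCheckout true
    echo "Source/" >> .git/info/sparse-checkout
    echo "Tools/"  >> .git/info/sparse-checkout

    # populate the working tree; with blob:none only the blobs for the
    # listed paths should be fetched at this point
    git read-tree -mu HEAD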

[1] https://gist.github.com/annulen/835ac561e22bedd7138d13392a7a53be

-- 
Regards,
Konstantin



