Re: [PATCH v3] fetch: delay fetch_if_missing=0 until after config

Jonathan Tan <jonathantanmy@xxxxxxxxxx> writes:

> Suppose, from a repository that has ".gitmodules", we clone with
> --filter=blob:none:
>
>   git clone --filter=blob:none --no-checkout \
>     https://kernel.googlesource.com/pub/scm/git/git
>
> Then we fetch:
>
>   git -C git fetch
>
> This will fail with an "unable to load config blob object" error,
> because the fetch_config_from_gitmodules() invocation in cmd_fetch()
> will attempt to load ".gitmodules" (which Git knows to exist because
> the client has the tree of HEAD) while fetch_if_missing is set to 0.
>
> fetch_if_missing is set to 0 too early: ".gitmodules" here should be
> lazily fetched.  Git must still set fetch_if_missing to 0 before the
> fetch itself, because packfile negotiation happens as part of the
> fetch (and we do not want to lazily fetch any missing objects while
> merely checking whether they exist)...
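
To make the ordering concrete, here is a toy model of the flow the
message describes; fetch_if_missing and fetch_config_from_gitmodules()
are the names used above, but the function bodies and everything else
are stand-ins for illustration, not git's actual builtin/fetch.c:

  #include <stdio.h>

  static int fetch_if_missing = 1;   /* git's global lazy-fetch switch */

  static void fetch_config_from_gitmodules(void)
  {
      /* Loading ".gitmodules" may need a lazy fetch of its blob. */
      if (!fetch_if_missing)
          puts("fatal: unable to load config blob object");
      else
          puts("lazily fetched the .gitmodules blob; config loaded");
  }

  static void run_fetch(void)
  {
      /* Packfile negotiation checks object existence and must not
         trigger lazy fetches, so the switch has to be off by now. */
      printf("packfile negotiation with fetch_if_missing=%d\n",
             fetch_if_missing);
  }

  int main(void)
  {
      /* Before the patch, fetch_if_missing = 0 happened up here,
         which is what broke the .gitmodules load below. */
      fetch_config_from_gitmodules();

      fetch_if_missing = 0;   /* the patch delays the reset to here */
      run_fetch();
      return 0;
  }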

Is it only me who feels that this is piling band-aids on top of
band-aids?

Perhaps the addition (and enabling) of lazy fetching should have been
done only after the "checking existence" calls were vetted and sifted
into two categories.  Some accesses to objects mean "we need it right
now, so let's lazily fetch it if that fallback is available to us",
as opposed to "if we do not have it locally right now, we want to
know that fact".  And each callsite should be able to declare for
which of the two reasons it is making the access.
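
Concretely, such a per-callsite declaration could look something like
the sketch below; every identifier in it is made up for illustration
and none of it is taken from git's actual API:

  #include <stdbool.h>
  #include <stdio.h>

  enum object_access_reason {
      ACCESS_NEED_CONTENT,     /* need the object now; a lazy fetch is
                                  an acceptable fallback */
      ACCESS_LOCAL_EXISTENCE,  /* only asking "do we have it locally?" */
  };

  /* Stub that pretends the object is missing, as in a partial clone. */
  static bool in_local_store(const char *name)
  {
      (void)name;
      return false;
  }

  static bool lazy_fetch(const char *name)
  {
      printf("lazily fetching %s\n", name);
      return true;
  }

  static bool have_object(const char *name, enum object_access_reason why)
  {
      if (in_local_store(name))
          return true;
      if (why == ACCESS_NEED_CONTENT)
          return lazy_fetch(name);
      return false;   /* existence probe: report the fact, never fetch */
  }

  int main(void)
  {
      /* A config loader needs the contents... */
      have_object(".gitmodules blob", ACCESS_NEED_CONTENT);
      /* ...while packfile negotiation only probes for existence. */
      have_object("negotiation tip", ACCESS_LOCAL_EXISTENCE);
      return 0;
  }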

The single fetch-if-missing boolean may have been a quick-and-dirty
way to get the ball rolling, but perhaps the codebase has grown up
enough that it is time to wean ourselves off of it?


