Re: [PATCH 0/7] Submodules and partial clones

On Tue, 29 Sep 2020 11:05:08 -0700
Jonathan Tan <jonathantanmy@xxxxxxxxxx> wrote:

> > I've been investigating what is required to get submodules and
> > partial clones to work well together.  The issue seems to be that
> > the correct repository is not passed around, so we sometimes end up
> > trying to fetch objects from the wrong place.
> > 
> > These patches don't make promisor_remote_get_direct handle different
> > repositories because I've not found a case where that is necessary.
> 
> Anything that reads a submodule object without spawning another
> process to do so (e.g. grep, which adds submodule object stores as
> alternates in order to read from them) will need to be prepared to
> lazy-fetch objects into those stores.

Yes, grep just calls `add_to_alternates_memory` and will be broken.

When handling nested submodules, `config_from_gitmodules` does the same
thing, so that will also be broken if any of the .gitmodules files need
fetching.

Fixing these probably does require supporting fetching of objects from
submodules.
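
For reference, the relevant pattern in builtin/grep.c::grep_submodule() is
roughly the following (simplified and from memory, so details may be off;
"superproject" and "sub" here are grep_submodule()'s arguments):

	struct repository subrepo;

	if (repo_submodule_init(&subrepo, superproject, sub))
		return 0;

	/*
	 * The submodule's object directory becomes an in-memory alternate
	 * of the superproject, so from here on submodule objects are read
	 * through the superproject's object store.  A missing promisor
	 * object would therefore be lazy-fetched in the superproject (from
	 * the wrong promisor remote), or not at all.
	 */
	add_to_alternates_memory(subrepo.objects->odb->path);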

> > The patches rework various cases where objects from a submodule are
> > added to the object store of the main repository.  There are some
> > remaining cases where add_to_alternates_memory is used to do this,
> > but add_submodule_odb has been removed.
> > 
> > I expect there will be some remaining issues, but these changes
> > seem to be enough to get the basics working.  
> 
> What are the basics that work?

I've tried at least the following, in a repo with several submodules and
large objects (but no nested submodules):
- git clone --recursive --filter=blob:limit=1M ...
- git pull --rebase --recurse-submodules=on-demand
- git show --submodule=diff <commit-with-big-submodule-object>
- git push --recurse-submodules=check
- git push --recurse-submodules=on-demand

I used the partial clone for a while and didn't hit any problems, but I
can't say what (relevant) commands I might have used.

An important thing that I've not tried is a merge that needs to fetch
objects.  I should probably write a testcase for that.

> When I looked into this, my main difficulty lay in getting the
> lazy fetch to work in another repository. Now that lazy fetches are
> done using a separate process, the problem has shifted to being able
> to invoke run_command() in a separate Git repository. I haven't
> figured out the best way to ensure that run_command() is run with a
> clean set of environment variables (so no inheriting of GIT_DIR
> etc.), but that doesn't seem insurmountable.

Yes, I think that to fix promisor_remote_get_direct we need to:
- store the promisor configuration per-repository
- run the fetch process in the correct repository
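
For the first point, the rough shape I have in mind is to move the
file-scope promisor list in promisor-remote.c into something hanging off
"struct repository", e.g. (struct and field names made up, just to
illustrate the idea):

struct promisor_remote_config {
	struct promisor_remote *promisors;
	struct promisor_remote **promisors_tail;
	int initialized;
};

/* hypothetical new member of struct repository: */
struct promisor_remote_config *promisor_remote_config;

promisor_remote_find() and friends would then take a struct repository and
initialize that lazily via repo_config(repo, ...) instead of git_config(),
so they stop assuming the_repository.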

AFAICT we just need to set cp.dir and call prepare_submodule_repo_env
to get the right environment for the fetch process. The per-repository
configuration looks more fiddly to do.  I'm happy to try and make these
additional changes (but it won't be quick as I'm busy with the day job).
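
Concretely, something like the following is what I have in mind for the
fetch process (completely untested; the helper name is made up, and the
option list just mirrors roughly what the current fetch_objects() in
promisor-remote.c spawns):

/*
 * Sketch only: run the lazy fetch inside the repository that
 * promisor_remote_get_direct() was called with.  Would live in
 * promisor-remote.c, which would also need "submodule.h" for
 * prepare_submodule_repo_env().
 */
static int fetch_objects_in_repo(struct repository *repo,
				 const char *remote_name,
				 const struct object_id *oids,
				 int oid_nr)
{
	struct child_process cp = CHILD_PROCESS_INIT;
	FILE *in;
	int i;

	cp.git_cmd = 1;
	cp.in = -1;
	cp.dir = repo->worktree;	/* run the fetch over there ... */
	prepare_submodule_repo_env(&cp.env_array); /* ... without inheriting GIT_DIR etc. */
	strvec_pushl(&cp.args, "-c", "fetch.negotiationAlgorithm=noop",
		     "fetch", remote_name, "--no-tags",
		     "--no-write-fetch-head", "--recurse-submodules=no",
		     "--filter=blob:none", "--stdin", NULL);
	if (start_command(&cp))
		return -1;

	in = xfdopen(cp.in, "w");
	for (i = 0; i < oid_nr; i++)
		fprintf(in, "%s\n", oid_to_hex(&oids[i]));
	fclose(in);

	return finish_command(&cp) ? -1 : 0;
}

That assumes repo->worktree is set; a bare submodule would presumably need
cp.dir = repo->gitdir plus prepare_submodule_repo_env_in_gitdir() instead.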

In any case we need to pass the right repository around.


