Glen Choo <chooglen@xxxxxxxxxx> writes:

> I would have come to same conclusion if I agreed that we should recurse
> into submodules even if no objects are fetched. When I first wrote this
> patch, I was convinced that "no new objects" implies "no need to update
> submodules" (see my response at [1]), but I'm not sure any more and I'd
> like to check my understanding.

For example, there is an "everything_local()" optimization in the
fetch-pack codepath where the following steps happen:

 (1) we look at the current values of the remote refs we are fetching,
     taken from their ls_refs output (let's call them the "new tips");

 (2) we notice that all of these objects happen to exist in our object
     store;

 (3) we make sure that we do not see any "missing links" if we run a
     reachability traversal that starts from these objects and from our
     existing refs, and stops where the traversals intersect.

When the last step finds that we already have all the objects necessary
to safely point our refs at these "new tips", then we have no reason to
perform a physical transfer of objects.  Yet we'd still update our refs
to the "new tips".

This can happen in a number of ways.

Imagine that you have a clone of https://github.com/git/git/ for only
its 'main' branch (i.e. a single-branch clone).  If you then say "git
fetch origin maint:maint", we'll learn that the tip of their 'maint'
branch points at a commit, look into our object store, and find that
there is no missing object between it and the part of the object graph
that is reachable from our refs (i.e. my 'maint' is always an ancestor
of my 'main'), so there is no reason to transfer any object.  Yet we
will create a new ref and point it at that commit (a concrete command
sequence is at the end of this message).

Or you did "git branch -d" locally, making objects unreachable in your
object store, and then fetched from your upstream, which had fast
forwarded to the contents of the branch you just deleted.

Or they rewound and rebuilt their branches since you fetched the last
time, then realized their mistake, and now their refs point at a commit
that you have already seen but that is different from what your
remote-tracking branches point at now.

Or you are using Derrick's "prefetch" (in "git maintenance run") and a
background process has already downloaded the objects needed for the
branch you are fetching now.  Depending on what happened when these
objects were pre-fetched, such a real fetch that did not have to
perform an object transfer may well need to adjust things in the
submodule repository.  "prefetch" is designed not to disrupt normal
operation and to stay as invisible as possible, so I would expect that
it won't do any priming of the submodules based on what it prefetched
for the superproject, for example.

So in short, physical object transfer can be optimized out even when
the external world view, i.e. where in the history graph the refs
point, changes in a way that makes it necessary to check in the
submodule repositories.
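
To make the single-branch example above concrete, the sequence I have
in mind is roughly the following sketch (the branch names and the clone
directory follow the hypothetical above, so substitute the real names
as needed; it assumes the tip of their 'maint' is an ancestor of the
'main' you cloned):

    # a single-branch clone; only objects reachable from 'main' are
    # transferred, and only 'main' is configured to be fetched
    $ git clone --single-branch --branch main https://github.com/git/git/ git
    $ cd git

    # ask for their 'maint' explicitly; everything_local() finds its tip
    # already in our object store and fully connected, so no objects are
    # transferred, yet refs/heads/maint is created and points at their tip
    $ git fetch origin maint:maint

Even though the second command moves no objects, it changes where our
refs point, which is exactly the situation where we may still need to
look into the submodules.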