Jeff King <peff@xxxxxxxx> writes:

>> It may be (?) that it is a good time to think about a 'datedepth'
>> capability to bypass the current counted-depth shallow fetch that can
>> cause so much trouble.  With a date limited depth the relevant tags
>> could also be fetched.
>
> I don't have anything against such an idea, but I think it is orthogonal
> to the issue being discussed here.

Correct.

The biggest problem with the "shallow" hack is that a deepening fetch
counts from the tip of the refs at the time of the deepening (i.e.
"clone --depth", followed by a number of plain "fetch"es, followed by
"fetch --depth").

If you started from a shallow clone of depth 1 and repeatedly fetched
to keep up while the history grew by 100 commits, you would still have
a connected history down to the initial cauterization point, and
"fetch --depth=200" would give you a history that is deeper than your
original clone by 100 commits.

But if you started from the same shallow clone of depth 1, did not do
anything while the history grew by 100 commits, and then decided to
fetch again with "fetch --depth=20", the history does not grow.  The
fetch just creates a 20-commit-deep history from the updated tip and
leaves the commit from your original clone dangling.

The problem with "depth" does not have anything to do with how it is
specified at the UI level.  The end-user input is only used to find
the set of commits at which the history is cauterized, and once they
are computed, the "shallow" logic works solely on "is this part of the
history before these cauterization points, or after, in topological
terms?" (and it has to be that way to make sure we get reproducible
results).  Even if a new way to specify these cauterization points in
terms of date were introduced, it would not change anything and would
not solve the fundamental problem the code has when deepening.
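
To make the second case concrete, something along these lines should
show it (a rough, untested sketch; the repository names 'upstream.git',
'work' and 'shallow' and the commit counts are made up for
illustration, and the branch names assume a consistent default branch
on the machine):

    # Throw-away upstream with a handful of commits.
    git init --bare upstream.git
    git clone upstream.git work
    (cd work &&
     for i in $(seq 1 5); do git commit --allow-empty -m "base $i"; done &&
     git push origin HEAD)

    # Shallow clone of depth 1; its history is cauterized right below the tip.
    # file:// is needed so that --depth is not ignored for a local clone.
    git clone --depth=1 "file://$PWD/upstream.git" shallow
    git -C shallow rev-list --count HEAD          # -> 1

    # The upstream grows by 100 commits while the shallow clone sits idle.
    (cd work &&
     for i in $(seq 1 100); do git commit --allow-empty -m "new $i"; done &&
     git push origin HEAD)

    # The deepening counts from the *updated* tip: we end up with 20
    # commits below the new tip, and the commit we originally cloned is
    # no longer connected to that history inside the shallow repository.
    git -C shallow fetch --depth=20 origin
    git -C shallow rev-list --count FETCH_HEAD    # -> 20, not 101
    git -C shallow merge-base HEAD FETCH_HEAD     # -> no common ancestor found

Inside the shallow repository the deepened history and the originally
cloned commit end up looking unrelated, even though topologically one
is an ancestor of the other.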