On Thu, Oct 12, 2023 at 09:17:09AM -0700, Junio C Hamano wrote:
> Patrick Steinhardt <ps@xxxxxx> writes:
> 
> > Wouldn't this have the potential to significantly regress performance
> > for all those preexisting users of the `--missing` option? The commit
> > graph is quite an important optimization nowadays, and especially in
> > commands where we potentially walk a lot of commits (like we may do
> > here) it can result in queries that are orders of magnitude faster.
> 
> The test fails only when GIT_TEST_COMMIT_GRAPH is on, which updates
> the commit-graph every time a commit is made via "git commit" or
> "git merge".
> 
> I'd suggest stepping back and thinking a bit.
> 
> My assumption has been that the failing test emulates this scenario,
> which can happen in real life:
> 
> * The user creates a new commit.
> 
> * A commit graph is written (not as part of GIT_TEST_COMMIT_GRAPH,
>   which is not realistic, but as part of "maintenance").
> 
> * The repository loses some objects due to corruption.
> 
> * Now, "--missing=print" is invoked so that the user can view what
>   is missing. Or "--missing=allow-promisor" to ensure that the
>   repository does not have missing objects other than the ones that
>   the promisor would give us if we asked again.
> 
> * But because the connectivity of these objects appears in the
>   commit-graph file, we fail to notice that these objects are
>   missing, producing wrong results. If we disabled the commit-graph
>   during traversal (an earlier writing of it was perfectly OK), then
>   "rev-list --missing" would have noticed and reported what the
>   user wanted to know.
> 
> In other words, the "optimization" you value is working to quickly
> produce a wrong result. Is it "significantly regress"ing if we
> disabled it to obtain the correct result?

It depends, in my opinion. If:

- Wrong results caused by the commit graph are only introduced with
  this patch series due to the changed behaviour of `--missing`.
- We disable commit graphs proactively only because of the changed
  behaviour of `--missing`.

Then yes, it does feel wrong to me to disable commit graphs and regress
performance for use cases that previously worked both correctly and
fast.

> My assumption also has been that there is no point in running
> "rev-list --missing" if we know there is no repository corruption,
> and those who run "rev-list --missing" want to know if the objects
> are really available, i.e. even if a commit-graph that is out of sync
> with reality says an object exists, if it is not in the object store,
> they would want to know that.
> 
> If you can show me that it is not the case, then I may be persuaded
> why producing a result that is out of sync with reality _quickly_,
> instead of taking time to produce a result that matches reality, is
> a worthy "optimization" to keep.

Note that I'm not saying that it's fine to return wrong results -- this
is of course a bug that needs to be addressed somehow. After all, things
working correctly should always trump things working fast. But until
now it felt more like we were going in the direction of disabling
commit graphs without checking whether there is an alternative solution
that allows us to get the best of both worlds, correctness and
performance. So what I'm looking for in this thread is a reason why we
_can't_ have that, or at least can't have it without unreasonable
amounts of work.

We have helpers like `lookup_commit_in_graph()` that are designed to
detect stale commit graphs by double-checking whether a commit that has
been looked up via the commit graph actually exists in the repository.
So I'm wondering whether this could help us address the issue.

If there is a good reason why all of that is not possible then I'm
happy to cave in.

Patrick
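For reference, the stale commit-graph scenario described earlier in the
thread can be reproduced along these lines. This is a rough sketch, not
one of the series' test cases: it assumes a reasonably recent git in
PATH, uses a throwaway directory, and the final output depends on
whether the object walk trusts the (now stale) commit-graph.

```shell
# Sketch: reproduce the stale commit-graph scenario in a scratch repo.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email you@example.com
git config user.name "You"

git commit -q --allow-empty -m base   # the user creates commits
git commit -q --allow-empty -m tip
git commit-graph write --reachable    # a commit-graph is written, as
                                      # "git maintenance" would do

# Simulate corruption: delete the base commit's loose object from
# the object store. The commit-graph still records it.
base=$(git rev-parse HEAD^)
rm -f ".git/objects/$(printf %s "$base" | cut -c1-2)/$(printf %s "$base" | cut -c3-)"

# Ask rev-list to report missing objects. Whether the lost commit is
# reported as "?<oid>" depends on whether the traversal consults the
# stale commit-graph instead of the object store.
git rev-list --objects --missing=print HEAD || true
```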
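To illustrate the safeguard being referred to: the following is a toy
model, not Git's actual implementation. The "graph" stands in for a
fast but possibly stale index, the "odb" for the object store as ground
truth, and `lookup_commit()` only trusts a graph hit after confirming
the object still exists in the store, mirroring the double-check that
`lookup_commit_in_graph()` performs.

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Toy model (NOT git's code): a set of object IDs, used both for the
 * possibly stale "commit graph" and for the authoritative object
 * store. */
#define MAX_OBJECTS 8

struct store {
	const char *oids[MAX_OBJECTS];
	int n;
};

static bool store_has(const struct store *s, const char *oid)
{
	for (int i = 0; i < s->n; i++)
		if (!strcmp(s->oids[i], oid))
			return true;
	return false;
}

/* Trust a graph hit only after verifying it against the object store,
 * so a stale graph cannot make a lost commit look present. */
static bool lookup_commit(const struct store *graph,
			  const struct store *odb,
			  const char *oid)
{
	if (!store_has(graph, oid))
		return false;           /* not in the graph at all */
	return store_has(odb, oid);     /* double-check against reality */
}

int main(void)
{
	struct store graph = { { "c1", "c2" }, 2 }; /* written earlier */
	struct store odb   = { { "c1" },       1 }; /* "c2" was lost   */

	printf("c1: %s\n", lookup_commit(&graph, &odb, "c1") ? "ok" : "missing");
	printf("c2: %s\n", lookup_commit(&graph, &odb, "c2") ? "ok" : "missing");
	return 0;
}
```

The point of the sketch is only that the verification step costs one
object-existence check per graph hit, which is far cheaper than
disabling the graph for the whole walk.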