Re: [PATCH 05/11] maintenance: add commit-graph task

On Thu, 6 Aug 2020 at 18:50, Derrick Stolee via GitGitGadget
<gitgitgadget@xxxxxxxxx> wrote:
> diff --git a/Documentation/git-maintenance.txt b/Documentation/git-maintenance.txt
> index 089fa4cedc..35b0be7d40 100644
> --- a/Documentation/git-maintenance.txt
> +++ b/Documentation/git-maintenance.txt
> @@ -35,6 +35,24 @@ run::
>  TASKS
>  -----
>
> +commit-graph::
> +       The `commit-graph` job updates the `commit-graph` files incrementally,
> +       then verifies that the written data is correct. If the new layer has an
> +       issue, then the chain file is removed and the `commit-graph` is
> +       rewritten from scratch.
> ++
> +The verification only checks the top layer of the `commit-graph` chain.
> +If the incremental write merged the new commits with at least one
> +existing layer, then there is potential for on-disk corruption being
> +carried forward into the new file. This will be noticed and the new
> +commit-graph file will be clean as Git reparses the commit data from
> +the object database.

This reads somewhat awkwardly. I think what you mean is something like
"is there a risk of on-disk corruption? Yes, but no: we're clever
enough to detect it and avoid it". So from a user's point of view, I
think this is too detailed.
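
As I understand it, the task boils down to roughly the sequence below
(a sketch in terms of the porcelain commands; the task drives the
internal machinery directly, so take this only as an illustration):

 # Write a new incremental layer on top of any existing chain.
 git commit-graph write --split --reachable

 # Verify only the newest layer of the chain.
 if ! git commit-graph verify --shallow
 then
         # Removing the chain file forces the next write to
         # start over from the object database.
         rm -f .git/objects/info/commit-graphs/commit-graph-chain
         git commit-graph write --split --reachable
 fi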

How about

 The verification checks the top layer of the resulting `commit-graph` chain.
 This ensures that the maintenance task leaves the top layer in a shape
 that matches the object database, even if it was ostensibly constructed
 by "merging in" existing, incorrect layers.

? I don't quite like it -- it's a bit too technical -- but it describes
the end result the user should care about (on-disk data that is
consistent and correct) rather than how we got there.

Perhaps:

 The resulting "top layer" is verified to be correct. This should
 help avoid most on-disk corruption of the commit-graph and ensure
 that the commit-graph matches the object database.

Don't know whether that's entirely true though...

Food for thought, perhaps.

There's something probabilistic about this whole thing: if a lower layer
is corrupt, you might "eventually" get to replace it. I suppose it could
make sense to go "verify the whole thing, drop however many top layers
we need to drop to get only correct layers, then generate a new layer
(and merge and whatnot) on top of that". But the proposed commit message
makes it fairly clear that this would have other drawbacks and that we
don't really expect corrupt layers in the first place.
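
To make the trade-off concrete (again only a sketch):

 # What the patch does: verify only the newest layer; cheap, but it
 # can only catch corruption that was merged into that layer.
 git commit-graph verify --shallow

 # What the "drop all corrupt layers" approach would need: verify
 # every layer of the chain, at a cost proportional to all commits.
 git commit-graph verify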

Martin


