On 10/8/2020 11:53 AM, Jeff King wrote:
> On Thu, Oct 08, 2020 at 03:04:39PM +0000, Derrick Stolee via GitGitGadget wrote:
>> @@ -2023,17 +2023,27 @@ static void sort_and_scan_merged_commits(struct write_commit_graph_context *ctx)
>>
>>  		if (i && oideq(&ctx->commits.list[i - 1]->object.oid,
>>  			      &ctx->commits.list[i]->object.oid)) {
>> -			die(_("unexpected duplicate commit id %s"),
>> -			    oid_to_hex(&ctx->commits.list[i]->object.oid));
>> +			/*
>> +			 * Silently ignore duplicates. These were likely
>> +			 * created due to a commit appearing in multiple
>> +			 * layers of the chain, which is unexpected but
>> +			 * not invalid. We should make sure there is a
>> +			 * unique copy in the new layer.
>> +			 */
>
> You mentioned earlier checking that the metadata for the duplicates was
> identical. How hard would that be to do here?

I do think it is a bit tricky, since we would need to identify which
commit-graph layers each duplicate lives in, then compare the binary
data in each row (tree, date, generation) as well as the logical data
(converting parent int-ids into oids). One way to do this would be to
create distinct 'struct commit' objects (not using lookup_commit()),
but finding the two positions within the layers is the hard part.

At this point, any disagreement between rows would mean corrupt data
in one layer or the other, and that should be caught by the 'verify'
subcommand. It would definitely be caught by 'verify' in the merged
layer after the 'write' completes.

So far, we don't have any evidence that whatever causes the duplicate
rows could possibly write the wrong data to them. I'll keep it in mind
as we look for that root cause.

Thanks,
-Stolee
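
P.S. To make the comparison idea concrete, here is a rough, untested
sketch of what such a check could look like. It assumes a hypothetical
helper living inside commit-graph.c, so it can reach the file's static
bsearch_graph() and the GRAPH_DATA_WIDTH macro; nothing like it exists
in git.git today, and the parent comparison is deliberately left out:

	/*
	 * Hypothetical: check whether two layers of a split
	 * commit-graph chain agree on the metadata stored for one
	 * commit. The caller would walk the chain via ->base_graph
	 * to find the two layers that both contain 'oid'.
	 */
	static int graph_layers_agree(struct commit_graph *a,
				      struct commit_graph *b,
				      struct object_id *oid)
	{
		uint32_t pos_a, pos_b;
		const unsigned char *row_a, *row_b;

		/* find the lexicographic position within each layer */
		if (!bsearch_graph(a, oid, &pos_a) ||
		    !bsearch_graph(b, oid, &pos_b))
			return 0; /* not present in both layers */

		row_a = a->chunk_commit_data + GRAPH_DATA_WIDTH * pos_a;
		row_b = b->chunk_commit_data + GRAPH_DATA_WIDTH * pos_b;

		/* root tree OID: first hash_len bytes of the row */
		if (memcmp(row_a, row_b, a->hash_len))
			return 0;

		/*
		 * Generation number and commit date: the final 8 bytes
		 * of the row. These are position-independent, so a raw
		 * byte comparison across layers is safe.
		 */
		if (memcmp(row_a + a->hash_len + 8,
			   row_b + b->hash_len + 8, 8))
			return 0;

		/*
		 * The 8 bytes of parent int-ids in between are graph
		 * positions, which differ between layers even when the
		 * logical parents agree. They would need to be
		 * translated back to oids before comparing, which is
		 * the tricky part described above.
		 */
		return 1;
	}

The parent translation is exactly where this stops being cheap, which
is why I'd rather lean on 'verify' than bolt this onto the write path.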