On Wed, Feb 16, 2022 at 05:34:23PM -0800, Elijah Newren wrote:
> On Mon, Feb 14, 2022 at 1:32 AM Patrick Steinhardt <ps@xxxxxx> wrote:
> >
> > When fetching references from a remote we by default also fetch all
> > tags which point into the history we have fetched. This is a separate
> > step performed after updating local references because it requires us
> > to walk over the history on the client-side to determine whether the
> > remote has announced any tags which point to one of the fetched
> > commits.
> >
> > This backfilling of tags isn't covered by the `--atomic` flag: right
> > now, it only applies to the step where we update our local
> > references. This was an oversight at the time the flag was
> > introduced: its purpose is to either update all references or none,
> > but right now we happily update local references even in the case
> > where backfilling failed.
> >
> > Fix this by pulling up creation of the reference transaction such
> > that we can pass the same transaction to both the code which updates
> > local references and to the code which backfills tags. This allows us
> > to only commit the transaction in case both actions succeed.
> >
> > Note that we also have to start passing the transaction into
> > `find_non_local_tags()`: this function is responsible for finding all
> > tags which we need to backfill. Right now, it will happily return
> > tags which have already been updated with our local references. But
> > when we use a single transaction for both local references and
> > backfilling, then it may happen that we try to queue the same
> > reference update twice in the transaction, which consequently
> > triggers a bug. We thus have to skip over any tags which have already
> > been queued. Unfortunately, this requires us to reach into the
> > internals of the reference transaction to access queued updates, as
> > there is no non-internal interface right now which would allow us to
> > access this information.
>
> I like the changes you are making here in general, but I do agree that
> reaching into refs-internal feels a bit icky. I'm not familiar with
> the refs API nor the fetching code, so feel free to ignore these
> ideas, but I'm just throwing them out there as possibilities to avoid
> reaching into refs-internal:
>
> - you are trying to check for existing transactions to avoid
> duplicates, but those existing transactions came from elsewhere in the
> same code we control. Could we store a strset or strmap of the items
> being updated (in addition to storing them in the transaction), and
> then use the strset/strmap to filter out which tags we need to
> backfill? Or would that require plumbing an extra variable through an
> awful lot of callers to get the information into the right places?

We would basically need to plumb the variable through to most call
sites which also take the transaction as input, and those are rather
deep in the call stack. The reason I think it's preferable to use the
transaction instead is that it holds the definitive state of all
updates we have already queued, so we cannot accidentally forget to
update another auxiliary variable.

> - would it make sense to add a flag to the transaction API to allow
> duplicates if both updates update the ref to the same value? (I'm
> guessing you're updating to the same value, right?)

It should be the same value, yes.
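For illustration, this is the scenario we need to handle, as a rough
sketch using the refs.h API (the exact error message is cited from
memory):

    struct strbuf err = STRBUF_INIT;
    struct object_id oid; /* oid of the fetched tag */
    struct ref_transaction *transaction = ref_transaction_begin(&err);

    /* Queued once while updating local references... */
    ref_transaction_update(transaction, "refs/tags/v1.0", &oid, NULL,
                           0, "fetch", &err);
    /* ...and queued a second time by the tag backfilling code. */
    ref_transaction_update(transaction, "refs/tags/v1.0", &oid, NULL,
                           0, "fetch", &err);

    /*
     * The commit fails with something like "multiple updates for ref
     * 'refs/tags/v1.0' not allowed" because the refname was queued
     * twice, regardless of the fact that both updates agree on the
     * new value.
     */
    if (ref_transaction_commit(transaction, &err))
        error("%s", err.buf);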
There is a race in the context of tag backfilling, though: if the
initial fetch pulls in some tags, then it can happen that the second
fetch, which is used for the backfilling mechanism in some cases, pulls
in the same tag references but with different target objects. It's
unlikely to happen, but it cannot be ruled out completely. As Jonathan
pointed out, the backfilling fetch is only used when the transport does
not use "include-tag", though. The result in that case would be that
the transaction aborts because the same ref was added twice with
different values. And I'd say that this is correct behaviour in case
the user asked for an atomic fetch.

> - should we just add something to the refs API along the lines of
> "transaction_includes_update_for()" or something like that?

I think something in the spirit of this last option would be the
easiest solution. Using `includes_updates_for()` or the above idea of a
flag which allows duplicate updates could be quadratic in behaviour if
implemented naively, though: we would need to walk all queued updates
for each of the tags we want to queue. That's easy enough to avoid if
we just add a `for_each_queued_reference_update()` and then continue to
do the same thing as we do below (see the rough sketch at the end of
this mail). It also gives us greater flexibility compared to the other
alternatives.

> [...]
> > @@ -361,12 +362,28 @@ static void find_non_local_tags(const struct ref *refs,
> >         const struct ref *ref;
> >         struct refname_hash_entry *item = NULL;
> >         const int quick_flags = OBJECT_INFO_QUICK | OBJECT_INFO_SKIP_FETCH_OBJECT;
> > +       int i;
> >
> >         refname_hash_init(&existing_refs);
> >         refname_hash_init(&remote_refs);
> >         create_fetch_oidset(head, &fetch_oids);
> >
> >         for_each_ref(add_one_refname, &existing_refs);
> > +
> > +       /*
> > +        * If we already have a transaction, then we need to filter out all
> > +        * tags which have already been queued up.
> > +        */
> > +       for (i = 0; transaction && i < transaction->nr; i++) {
> > +               if (!starts_with(transaction->updates[i]->refname, "refs/tags/") ||
> > +                   !(transaction->updates[i]->flags & REF_HAVE_NEW))
> > +                       continue;
> > +               (void) refname_hash_add(&existing_refs,
> > +                                       transaction->updates[i]->refname,
> > +                                       &transaction->updates[i]->new_oid);
>
> Why the typecast here?

`refname_hash_add()` returns the newly added entry, and we don't care
about it here. `add_one_refname()` has the same cast, potentially to
demonstrate that we don't need the return value? Compilers shouldn't
care, I think, but on the other hand some static analysis tools like
Coverity tend to complain about such things.
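To make the `for_each_queued_reference_update()` idea from above a bit
more concrete, here is a rough and completely untested sketch; all
names are made up:

    /*
     * refs.h: invoke `fn` for each update queued in the transaction.
     * The old/new oid is NULL if the respective value has not been
     * set for a queued update.
     */
    typedef void (*for_each_queued_update_fn)(const char *refname,
                                              const struct object_id *old_oid,
                                              const struct object_id *new_oid,
                                              void *cb_data);

    /* refs.c: the only place that needs to know about the internals. */
    void for_each_queued_reference_update(struct ref_transaction *transaction,
                                          for_each_queued_update_fn fn,
                                          void *cb_data)
    {
        size_t i;
        for (i = 0; i < transaction->nr; i++) {
            struct ref_update *update = transaction->updates[i];
            fn(update->refname,
               (update->flags & REF_HAVE_OLD) ? &update->old_oid : NULL,
               (update->flags & REF_HAVE_NEW) ? &update->new_oid : NULL,
               cb_data);
        }
    }

The loop in `find_non_local_tags()` would then become a small callback
which adds every queued "refs/tags/*" update with a non-NULL new oid to
`existing_refs`, and builtin/fetch.c would not have to include
refs-internal.h at all anymore.

Patrick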