Patrick Steinhardt <ps@xxxxxx> writes:

> On Wed, Feb 05, 2025 at 10:40:41AM -0800, Junio C Hamano wrote:
>> If the user, on the other hand, is interested in keeping track of
>> all these thousands of refs, "git fetch" would have to ask and
>> receive advertisement for all these thousands of refs anyway, and
>> at that point, recording the no-op update would be a very small
>> part of the problem, I suspect. Besides, we have reftable that
>> would make this kind of problem easier to solve, no? ;-)
>
> Yeah, I was pondering whether to bring up reftables or not :) But
> indeed, with them it would be way more efficient, at least assuming that
> we write everything in a single transaction and not via multiple
> transactions. Which we generally don't in git-fetch(1) unless the user
> asks for `--atomic`, because we allow for a subset of the updates to
> fail. Consequently, even with reftables we'd end up writing N separate
> updates, where N is the number of advertised refs.
>
> This is a known problem that we actually plan to fix. Karthik is working
> on support for "partial" transactions, where it is allowed that a subset
> of ref updates fails without impacting other refs where the update would
> succeed. With this in place we could then refactor git-fetch(1) to write
> the update with a single transaction, only, even in the non-atomic case.

You've played my hand here: I've posted the series now [1], and I agree
with everything you've said. It should really help with optimizing
reftables.

[1]: https://lore.kernel.org/git/20250207-245-partially-atomic-ref-updates-v1-0-e6a3690ff23a@xxxxxxxxx/T/#t

Thanks

> Patrick