On Thu, Jan 15, 2015 at 2:46 PM, Jeff King <peff@xxxxxxxx> wrote:
> On Thu, Jan 15, 2015 at 02:36:11PM -0800, Stefan Beller wrote:
>
>> So here is my proposal for small transactions
>> (just one ref [and/or reflog] touched):
>
> The implication being that a "large" transaction is any with more than
> one update.

Exactly.

> I think performance may suffer if you do not also take into account the
> size of the packed-refs file. If you are updating 5 refs and there are
> 10 in the packed-refs file, rewriting the extra 5 is probably not a big
> deal. If there are 400,000 in the packed-refs file, it probably is. I'm
> not sure where the cutoff is (certainly the per-ref cost is less for
> packed-refs once you have started writing the file, so there is
> probably some crossover percentage that you could measure).
>
>> * detect if we transition to a large transaction
>>   (by having more than one entry in transaction->updates)
>>   if so:
>>   * Pack all currently existing refs into the packed
>>     refs file, commit the packed-refs file and delete
>>     all loose refs. This will avoid (d/f) conflicts.
>>
>>   * Keep the packed-refs file locked and move the first
>>     transaction update into the packed-refs.lock file
>
> This increases lock contention, as now independent ref updates all need
> to take the same packed-refs.lock. This can be a problem on a busy
> repository, especially because we never retry the packed-refs lock.
> We already see this problem somewhat on GitHub. Ref deletions need the
> packed-refs.lock file, which can conflict with another deletion, or
> with running `git pack-refs`.
>
> -Peff

I see the performance problem as well as the contention problem you are
pointing out. Dealing with loose refs, however, creates other problems,
such as directory/file conflicts when renaming. I am trying to think of
a way that moves most of the failures to the transaction_update phase,
so that transaction_commit stays simple and is not expected to produce
many errors.

So I think a generic large transaction cannot really be handled outside
the packed-refs file. There could be another special case for
mass-deleting refs, though. Or we could retry the packed-refs lock. Or
we could first check whether we *really* need to lock the packed-refs
file (just realized we already do that :/).

(Just curious:) may I ask on which kinds of repositories you see
packed-refs.lock contention?

I want to improve git's atomicity, especially for 'weird' cases as
presented in my previous mail[1]. Eventually I even want to have
cross-repository atomicity in git, so an example could be:

    (
      cd API-Provider &&
      edit files              # new changes breaking the API
      git commit -a -m "..."
    ) &&
    (
      cd API-consumer &&
      edit files              # use the new and shiny API
      git commit -a -m "..."
    ) &&
    git multipush --atomic API-Provider:origin:master \
                           API-consumer:origin:master

With such a goal in mind, a reliable and easy-to-use ref transaction
API makes life much easier. By reliable I mean that there are no sudden
problems like those pointed out in [1]; these kinds of rejections make
users unhappy. And by easy to use I mean that there are only a few
functions I need to know, with no proliferation of functions exposed in
the API. Internally we can do whatever we want, such as special-casing
delete-only transactions.
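Roughly, I would want a caller to get by with something like the
following sketch (signatures approximate and still in flux in this
series; new_sha1/old_sha1 stand in for real values):

    struct strbuf err = STRBUF_INIT;
    struct ref_transaction *t = ref_transaction_begin(&err);

    /*
     * Queue the update(s); most failures should already surface
     * here, in the update phase, so that the commit step itself
     * is unlikely to fail.
     */
    if (!t ||
        ref_transaction_update(t, "refs/heads/master",
                               new_sha1, old_sha1,
                               0 /* flags */, 1 /* have_old */,
                               "push", &err) ||
        ref_transaction_commit(t, &err))
            die("%s", err.buf);
    ref_transaction_free(t);
    strbuf_release(&err);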
As another, unrelated thought (400,000 refs is quite a lot): would it
make sense to have packed-refs files grouped by topic directory, i.e.
one packed-refs file for topic/1, topic/2, ..., topic/* and another one
for feature/a, feature/b, ..., feature/*? Admittedly, this would help
more with your contention problems than with making things atomic. But
it would avoid having 400,000 refs in a single packed-refs file, which
may still make larger transactions easier to handle. (See the P.S.
below for a rough sketch.)

Thanks,
Stefan

[1] http://www.mail-archive.com/git@xxxxxxxxxxxxxxx/msg63919.html
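P.S.: To make the grouping thought a bit more concrete, here is a
purely hypothetical sketch of how a refname could be mapped to a
per-directory packed-refs shard; packed_refs_shard_path and the
packed-refs.d layout are made-up names, not existing git code, and the
sketch assumes git's strbuf API:

    /*
     * Hypothetical: map "refs/heads/topic/1" to a shard file such
     * as ".git/packed-refs.d/refs-heads-topic". Refs directly at
     * the top level would share one "top-level" shard.
     */
    static char *packed_refs_shard_path(const char *refname)
    {
            struct strbuf buf = STRBUF_INIT;
            const char *slash = strrchr(refname, '/');
            size_t dirlen = slash ? (size_t)(slash - refname) : 0;
            size_t i;

            strbuf_addstr(&buf, ".git/packed-refs.d/");
            if (!dirlen)
                    strbuf_addstr(&buf, "top-level");
            for (i = 0; i < dirlen; i++)
                    strbuf_addch(&buf, refname[i] == '/' ? '-' : refname[i]);
            return strbuf_detach(&buf, NULL);
    }

A transaction would then only contend on the shard(s) it actually
touches, instead of on a single packed-refs.lock.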