On Mon, Dec 12, 2022 at 12:01:51AM +0800, ZheNing Hu wrote:
> Ævar Arnfjörð Bjarmason <avarab@xxxxxxxxx> wrote on Fri, Dec 9, 2022 at 21:52:
> >
> >
> > On Fri, Dec 09 2022, ZheNing Hu wrote:
> >
> > > Jeff King <peff@xxxxxxxx> wrote on Fri, Dec 9, 2022 at 09:37:
> > >>
> > >> On Fri, Dec 09, 2022 at 01:49:18AM +0100, Michal Suchánek wrote:
> > >>
> > >> > > In this case it's the mtime on the object file (or the pack containing
> > >> > > it). But yes, it is far from a complete race-free solution.
> > >> >
> > >> > So if you are pushing a branch that happens to reuse commits or other
> > >> > objects from an earlier branch that might have been collected in the
> > >> > meantime you are basically doomed.
> > >>
> > >> Basically yes. We do "freshen" the mtimes on object files when we omit
> > >> an object write (e.g., your index state ends up at the same tree as an
> > >> old one). But for a push, there is no freshening. We check the graph at
> > >> the time of the push and decide if we have everything we need (either
> > >> newly pushed, or from what we already had in the repo). And that is
> > >> what's racy; somebody might be deleting as that check is happening.
> > >>
> > >> > People deleting a branch and then pushing another variant in which many
> > >> > objects are the same is a risk.
> > >> >
> > >> > People exporting files from somewhere and adding them to the repo which
> > >> > are bit-identical when independently exported by multiple people and
> > >> > sometimes deleting branches is a risk.
> > >>
> > >> Yes, both of those are risky (along with many other variants).
> > >>
> > >
> > > I'm wondering if there's an easy, even if poorly performing, way to do
> > > gc safely? For example, adding a file lock to the repository during
> > > git push and git gc?
> >
> > We don't have any "easy" way to do it, but we probably should. The root
> > cause of the race is tricky to fix, and we don't have any "global ref
> > lock".
> >
> > But in the context of a client<->server setup where you want to do gc on
> > the server, a good enough and easy solution would be e.g.:
> >
> > 1. Have a {pre,post}-receive hook logging attempted/finished pushes
> > 2. Have the pre-receive hook able to reject (or better yet, hang with
> >    sleep()) incoming deletions
> > 3. Do a gc with a small wrapper script, which:
> >    - Flips the "no deletion ops now" (or "delay deletion ops") switch
> >    - Polls until it's sure there are no relevant in-progress operations
> >    - Does a full gc
> >    - Unlocks
> >
> Well, I understand that after a branch is deleted, some objects may become
> unreachable, and then these objects are deleted by a concurrent git gc,
> which leads to data corruption in a concurrent git push if these objects
> need to be referenced. But what I don't understand is: is it enough to just
> block the operation of deleting branches? Once gc happens to be deleting an
> unreachable object, and a git push of a new branch (git receive-pack)
> happens to need it, won't it still go wrong?

As I understand the problem:

 - push figures out which objects are missing on the remote end
 - push starts sending the missing objects
 - remote gc deletes objects that are not reachable but that push assumes
   the remote still has - these might be part of a branch deleted long
   before gc started
 - push finishes and the branch is advanced to point to an object that
   references objects that were deleted by gc
 -> the repository is corrupted

The only way to prevent this is to not delete anything ever, or to make
sure that objects that are part of any ongoing operation are always
referenced.
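For what it's worth, the wrapper scheme Ævar sketches above could look
roughly like this. All the names here (deletions-blocked.lock, the
pushes-in-flight counter, which the {pre,post}-receive hooks of step 1
would have to maintain) are made up, and none of this is tested:

    #!/bin/sh
    # gc wrapper: block ref deletions, wait out in-flight pushes, gc, unlock
    GIT_DIR=$(git rev-parse --git-dir) || exit 1
    lock="$GIT_DIR/deletions-blocked.lock"

    : >"$lock"                 # flip the "no deletion ops now" switch
    trap 'rm -f "$lock"' EXIT  # always unlock, even if gc fails

    # poll until no push is in progress
    while [ -s "$GIT_DIR/pushes-in-flight" ]
    do
        sleep 1
    done

    git gc                     # full gc while deletions are blocked

and, for step 2, a pre-receive hook along these lines:

    #!/bin/sh
    # pre-receive: refuse ref deletions while the gc lock exists
    zero=0000000000000000000000000000000000000000  # SHA-1; longer for SHA-256
    while read old new ref
    do
        if [ "$new" = "$zero" ] && [ -e "$GIT_DIR/deletions-blocked.lock" ]
        then
            echo "refusing deletion of $ref: gc in progress" >&2
            exit 1
        fi
    done
    exit 0

Note that this works by making sure gc never runs concurrently with a
push at all, not by making the race itself go away.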
Making sure that in-flight objects stay referenced would probably mean in
practice that any operation adding objects needs to add temporary
references to any objects it creates or aims to reference, and/or check
reachability of the referenced objects once the final object is created.
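As a rough illustration of what such temporary references might look
like in a receive-pack-style operation: the refs/in-flight namespace is
invented (git has no such thing today), $new_tip stands for the commit
the client asked us to set the ref to, and this sketch still ignores the
window before the temporary ref exists:

    # Pin the prospective tip so gc considers everything behind it
    # reachable (refs/in-flight is an invented namespace):
    git update-ref refs/in-flight/push-$$ "$new_tip"

    # ... index the incoming pack, run the connectivity check ...

    # Re-check that everything reachable from the tip is really present:
    git rev-list --objects --missing=error "$new_tip" >/dev/null || exit 1

    # Only then publish the branch and drop the temporary ref:
    git update-ref refs/heads/topic "$new_tip"
    git update-ref -d refs/in-flight/push-$$

Thanks

Michal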