On Thu, Nov 04, 2021 at 02:14:54PM +0100, Ævar Arnfjörð Bjarmason wrote:
>
> On Thu, Nov 04 2021, Patrick Steinhardt wrote:
>
> > When writing loose refs, we first create a lockfile, write the new ref
> > into that lockfile, close it and then rename the lockfile into place
> > such that the actual update is atomic for that single ref. While this
> > works as intended under normal circumstances, at GitLab we infrequently
> > encounter corrupt loose refs in repositories after a machine encountered
> > a hard reset. The corruption is always of the same type: the ref has
> > been committed into place, but it is completely empty.
> >
> > The root cause of this is likely that we don't sync contents of the
> > lockfile to disk before renaming it into place. As a result, it's not
> > guaranteed that the contents are properly persisted and one may observe
> > weird in-between states on hard resets. Quoting ext4 documentation [1]:
> >
> >     Many broken applications don't use fsync() when replacing existing
> >     files via patterns such as
> >     fd = open("foo.new")/write(fd,..)/close(fd)/rename("foo.new", "foo"),
> >     or worse yet, fd = open("foo", O_TRUNC)/write(fd,..)/close(fd). If
> >     auto_da_alloc is enabled, ext4 will detect the replace-via-rename
> >     and replace-via-truncate patterns and force that any delayed
> >     allocation blocks are allocated such that at the next journal
> >     commit, in the default data=ordered mode, the data blocks of the
> >     new file are forced to disk before the rename() operation is
> >     committed. This provides roughly the same level of guarantees as
> >     ext3, and avoids the "zero-length" problem that can happen when a
> >     system crashes before the delayed allocation blocks are forced to
> >     disk.
> >
> > This explicitly points out that one must call fsync(3P) before doing the
> > rename(3P) call, or otherwise data may not be correctly persisted to
> > disk.
> >
> > Fix this by always flushing refs to disk before committing them into
> > place to avoid this class of corruption.
> >
> > [1]: https://www.kernel.org/doc/Documentation/filesystems/ext4.txt
> >
> > Signed-off-by: Patrick Steinhardt <ps@xxxxxx>
> > ---
> >  refs/files-backend.c | 1 +
> >  1 file changed, 1 insertion(+)
> >
> > diff --git a/refs/files-backend.c b/refs/files-backend.c
> > index 151b0056fe..06a3f0bdea 100644
> > --- a/refs/files-backend.c
> > +++ b/refs/files-backend.c
> > @@ -1749,6 +1749,7 @@ static int write_ref_to_lockfile(struct ref_lock *lock,
> >  	fd = get_lock_file_fd(&lock->lk);
> >  	if (write_in_full(fd, oid_to_hex(oid), the_hash_algo->hexsz) < 0 ||
> >  	    write_in_full(fd, &term, 1) < 0 ||
> > +	    fsync(fd) < 0 ||
> >  	    close_ref_gently(lock) < 0) {
> >  		strbuf_addf(err,
> >  			    "couldn't write '%s'", get_lock_file_path(&lock->lk));
>
> Yeah, that really does seem like it's the cause of such zeroing-out
> issues.
>
> This has a semantic conflict with some other changes in flight, see:
>
>     git log -p origin/master..origin/seen -- write-or-die.c
>
> I.e. here you do want to not die, so fsync_or_die() doesn't make sense
> per se, but in those changes that function has grown to mean
> fsync_with_configured_strategy_or_die().
>
> Also we need the loop around fsync(), see cccdfd22436 (fsync(): be
> prepared to see EINTR, 2021-06-04).
>
> I think it would probably be best to create a git_fsync_fd() function
> which is non-fatal and has that config/while loop, and have
> fsync_or_die() be a "..or die()" wrapper around that; then you could
> call that git_fsync_fd() here.

Thanks for pointing it out, I'll base v2 on next in that case.

> On the change more generally there's some performance numbers quoted at,
> so re the recent discussions about fsync() performance I wonder how this
> changes things.

Yeah, good question. I punted on doing benchmarks for this, given that I
wasn't completely sure whether there are any preexisting ones which
would fit best here.
No matter the results, I'd still take the stance that we should by
default try to do the right thing and try hard not to end up with
corrupt data, and if the filesystem docs explicitly say we must call
fsync(3P), then that's what we should be doing. That being said, I
wouldn't mind introducing something like `core.fsyncObjectFiles` for
refs, too, so that folks who want an escape hatch have one.

> I've also noted in those threads recently that our overall use of fsync
> is quite, bad, and especially when it comes to assuming that we don't
> need to fsync dir entries, which we still don't do here.

Yeah. I also thought about not putting the fsync(3P) logic into the ref
code, but instead into our lockfiles. In theory, we should always be
doing this before committing lockfiles into place, so it would fit in
there quite naturally.

> The ext4 docs seem to suggest that this will be the right thing to do in
> either case, but I wonder if this won't increase the odds of corruption
> on some other filesystems.
>
> I.e. before we'd write() && rename() without the fsync(), so on systems
> that deferred fsync() until some global sync point we might have been
> able to rely on those happening atomically (although clearly not on
> others, e.g. ext4).
>
> But now we'll fsync() the data explicitly, then do a rename(), but we
> don't fsync the dir entry, so per POSIX an external application can't
> rely on seeing that rename yet. Will that bite us still, but just in
> another way on some other systems?
>
> 1. https://stackoverflow.com/questions/7433057/is-rename-without-fsync-safe

Good point. I'd be happy to extend this patch to also fsync(3P) the dir
entry. But it does sound like even more of a reason to move the logic
into the lockfiles, such that we don't have to duplicate it wherever we
really don't want to end up with corrupted data.

Patrick
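For illustration, the complete replace-via-rename sequence being
discussed (flush the file data, rename into place, then flush the
containing directory so the new directory entry is persisted as well)
could be sketched as follows. All names here are illustrative, not
existing Git API, and the EINTR retry loop is inlined to keep the
sketch self-contained.

```c
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Retry fsync() on EINTR; non-fatal, returns 0 on success, -1 on error. */
static int fsync_retrying(int fd)
{
	while (fsync(fd) < 0)
		if (errno != EINTR)
			return -1;
	return 0;
}

/*
 * Durably replace "dst" with the already-written file "tmp":
 * 1. fsync the file data, so the rename never commits an empty file;
 * 2. rename the file into place;
 * 3. fsync the parent directory, so the rename itself is persisted
 *    (per POSIX, a rename is only durable once the directory is synced).
 */
static int durable_replace(const char *tmp, const char *dst, const char *dir)
{
	int fd = open(tmp, O_WRONLY);
	if (fd < 0)
		return -1;
	if (fsync_retrying(fd) < 0) {
		close(fd);
		return -1;
	}
	if (close(fd) < 0)
		return -1;

	if (rename(tmp, dst) < 0)
		return -1;

	fd = open(dir, O_RDONLY);
	if (fd < 0)
		return -1;
	if (fsync_retrying(fd) < 0) {
		close(fd);
		return -1;
	}
	return close(fd);
}
```

If this logic lived in the lockfile code as suggested above, every
caller committing a lockfile into place would get both flushes for free.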