On Tue, Sep 21, 2021 at 4:41 PM Ævar Arnfjörð Bjarmason
<avarab@xxxxxxxxx> wrote:
>
> On Mon, Sep 20 2021, Neeraj Singh via GitGitGadget wrote:
>
> > When the new mode is enabled we do the following for new objects:
> >
> > 1. Create a tmp_obj_XXXX file and write the object data to it.
> > 2. Issue a pagecache writeback request and wait for it to complete.
> > 3. Record the tmp name and the final name in the bulk-checkin state for
> >    later rename.
> >
> > At the end of the entire transaction we:
> >
> > 1. Issue a fsync against the lock file to flush the hardware writeback
> >    cache, which should by now have processed the tmp file writes.
> > 2. Rename all of the temp files to their final names.
> > 3. When updating the index and/or refs, we assume that Git will issue
> >    another fsync internal to that operation.
>
> Perhaps note too that:
>
> 4. For loose objects, refs etc. we may or may not create directories,
>    and most certainly will be updating metadata on the immediate
>    directory containing the file, but none of that's fsync()'d.
>
> > On a filesystem with a singular journal that is updated during name
> > operations (e.g. create, link, rename, etc), such as NTFS and HFS+, we
> > would expect the fsync to trigger a journal writeout so that this
> > sequence is enough to ensure that the user's data is durable by the time
> > the git command returns.
> >
> > This change also updates the macOS code to trigger a real hardware flush
> > via fcntl(fd, F_FULLFSYNC) when fsync_or_die is called. Previously, on
> > macOS there was no guarantee of durability since a simple fsync(2) call
> > does not flush any hardware caches.
>
> There's no discussion of whether this is or isn't known to also work on
> some Linux filesystems, and for the OSes where this does work, is it only
> the object files themselves that are durable, or does metadata also "ride
> along"?

I unfortunately can't examine Linux kernel source code, and the details
of metadata consistency behavior across files are not something that
anyone in that group wants to pin down. As far as I can tell, the only
thing that's really guaranteed is fsyncing every single file you write,
plus its parent directory if you're creating a new file (which we always
are).

As came up in conversation with Christoph Hellwig elsewhere on thread,
Linux doesn't have any set of syscalls to make batch mode safe. It does
look like XFS would be safe if sync_file_range actually promised to wait
for all pagecache writeback definitively, since it would do a "log
force" to push all the dirty metadata to disk when we do our final
fsync.

I really didn't want to say something definitive about what Linux can or
will do, since I'm not in a position to really know or influence them.
Christoph did say that he would be interested in contributing a variant
of this patch that would be definitively safe on filesystems that honor
syncfs.
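To make the writeout-only step concrete, this is roughly the shape of
the request at the syscall level (a hedged sketch, not the code in this
series; the real entry point is git_fsync(fd, FSYNC_WRITEOUT_ONLY), and
the helper name here is made up):

#define _GNU_SOURCE		/* for sync_file_range(2) */
#include <fcntl.h>
#include <unistd.h>

/*
 * Push this file's dirty pages to the device without issuing a
 * FLUSH CACHE command to the storage hardware.
 */
static int writeout_only(int fd)
{
#ifdef __linux__
	/* Start writeback for the whole file and wait for it. */
	return sync_file_range(fd, 0, 0,
			       SYNC_FILE_RANGE_WAIT_BEFORE |
			       SYNC_FILE_RANGE_WRITE |
			       SYNC_FILE_RANGE_WAIT_AFTER);
#else
	/*
	 * On macOS a plain fsync(2) schedules writeback but does not
	 * flush the drive cache; the final fcntl(F_FULLFSYNC) in
	 * fsync_or_die() provides the hardware flush.
	 */
	return fsync(fd);
#endif
}

Whether SYNC_FILE_RANGE_WAIT_AFTER really covers *all* pagecache
writeback for the file is exactly the part Linux doesn't promise, per
the above.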
> > _Performance numbers_:
> >
> > Linux - Hyper-V VM running Kernel 5.11 (Ubuntu 20.04) on a fast SSD.
> > Mac - macOS 11.5.1 running on a Mac mini on a 1TB Apple SSD.
> > Windows - Same host as Linux, a preview version of Windows 11.
>
> This number is from a patch later in the series.
>
> > Adding 500 files to the repo with 'git add'. Times reported in seconds.
> >
> > core.fsyncObjectFiles | Linux |  Mac  | Windows
> > ----------------------|-------|-------|--------
> > false                 |  0.06 |  0.35 |    0.61
> > true                  |  1.88 | 11.18 |    2.47
> > batch                 |  0.15 |  0.41 |    1.53
>
> Per my https://lore.kernel.org/git/87mtp5cwpn.fsf@xxxxxxxxxxxxxxxxxxx
> and 6/6 in this series we've got perf tests for add/stash, but it would
> be really interesting to see how this is impacted by
> transfer.unpackLimit in cases where we may be writing packs or loose
> objects.

I'm having trouble understanding how unpackLimit is related to 'git
stash' or 'git add'. From code inspection, it doesn't look like we're
using those settings for adding objects except from across a transport.

Are you proposing that we have a similar setting for adding objects via
'add' using a packfile? I think that would be a good goal, but it might
be a bit tricky, since we've likely done a lot of the work to buffer the
input objects in order to compute their OIDs before we know how many
objects there are to add. If the policy were to "always add to a
packfile", it would be easier.

> > [...]
> > core.fsyncObjectFiles::
> > -	This boolean will enable 'fsync()' when writing object files.
> > -+
> > -This is a total waste of time and effort on a filesystem that orders
> > -data writes properly, but can be useful for filesystems that do not use
> > -journalling (traditional UNIX filesystems) or that only journal metadata
> > -and not file contents (OS X's HFS+, or Linux ext3 with "data=writeback").
> > +	A value indicating the level of effort Git will expend in
> > +	trying to make objects added to the repo durable in the event
> > +	of an unclean system shutdown. This setting currently only
> > +	controls the object store, so updates to any refs or the
> > +	index may not be equally durable.
>
> All these mentions of "object" should really clarify that it's "loose
> objects", i.e. we always fsync pack files.
>
> > +* `false` allows data to remain in file system caches according to
> > +  operating system policy, whence it may be lost if the system loses power
> > +  or crashes.
>
> As noted in point #4 of
> https://lore.kernel.org/git/87mtp5cwpn.fsf@xxxxxxxxxxxxxxxxxxx/ while
> this direction is overall an improvement over the previously flippant
> docs, they at least alluded to the context that the assumption behind
> "false" is that you don't really care about loose objects in isolation;
> you care about the loose objects *and* the ref update or whatever.
>
> As I think we've covered already (this is from memory), this may all
> have been based on some old ext3 assumption, but it's probably worth
> summarizing that here, i.e. if you've got an FS with globally ordered
> operations you can probably skip this, but probably not otherwise, etc.
>
> > +* `true` triggers a data integrity flush for each object added to the
> > +  object store. This is the safest setting that is likely to ensure durability
> > +  across all operating systems and file systems that honor the 'fsync' system
> > +  call. However, this setting comes with a significant performance cost on
> > +  common hardware.
>
> This is really overpromising things by omitting the fact that even if
> we're getting this feature you've hacked up right, we're still not
> fsyncing dir entries etc. (also noted above).
>
> So something that describes the narrow scope here, along with "loose
> objects" etc....
>
> > +* `batch` enables an experimental mode that uses interfaces available in some
> > +  operating systems to write object data with a minimal set of FLUSH CACHE
> > +  (or equivalent) commands sent to the storage controller. If the operating
> > +  system interfaces are not available, this mode behaves the same as `true`.
> > +  This mode is expected to be safe on macOS for repos stored on HFS+ or APFS
> > +  filesystems and on Windows for repos stored on NTFS or ReFS.
>
> Again, even if it's called "core.fsyncObjectFiles", if we're going to say
> "safe" we really need to say safe in what sense. Having written and
> fsync()'d the file is helping nobody if the metadata never arrives....

My concern with your feedback here is that this is user-facing
documentation. I'd assume that people who are not intimately familiar
with both their filesystem and Git's internals would just be completely
mystified by a long commentary on the specifics in the config
documentation. I think over time Git should focus on making this setting
really guarantee durability in a meaningful way across the entire
repository.

> > +static void do_sync_and_rename(struct string_list *fsync_state, struct lock_file *lock_file)
> > +{
> > +	if (fsync_state->nr) {
>
> I think less indentation here would be nice:
>
>	if (!fsync_state->nr)
>		return;
>	/* rest of unindented body */

Will fix.

> Or better yet do this check in unplug_bulk_checkin(), then here:
>
>	fsync_or_die();
>	for_each_string_list_item() { ...}
>	string_list_clear(....);

I'd prefer to put it in the callee for reasons of separation of
concerns. I don't want to have the caller and callee partially implement
the contract. The compiler should do a good enough job, since there's
only one caller and the function will probably get totally inlined.
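Concretely, for the reroll I'm planning to keep the check in the callee
with your early return, so the function would look something like this
(a sketch; the body is otherwise the same as the code quoted below):

static void do_sync_and_rename(struct string_list *fsync_state,
			       struct lock_file *lock_file)
{
	struct string_list_item *rename;

	if (!fsync_state->nr)
		return;

	/*
	 * Issue a full hardware flush against the lock file to ensure
	 * that all objects are durable before any renames occur. The
	 * per-object writeout has already happened; this is the single
	 * flush of the storage hardware's writeback cache.
	 */
	fsync_or_die(get_lock_file_fd(lock_file),
		     get_lock_file_path(lock_file));

	for_each_string_list_item(rename, fsync_state) {
		const char *src = rename->string;
		const char *dst = rename->util;

		if (finalize_object_file(src, dst))
			die_errno(_("could not rename '%s' to '%s'"),
				  src, dst);
	}

	string_list_clear(fsync_state, 1);
}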
> > +		struct string_list_item *rename;
> > +
> > +		/*
> > +		 * Issue a full hardware flush against the lock file to ensure
> > +		 * that all objects are durable before any renames occur.
> > +		 * The code in fsync_and_close_loose_object_bulk_checkin has
> > +		 * already ensured that writeout has occurred, but it has not
> > +		 * flushed any writeback cache in the storage hardware.
> > +		 */
> > +		fsync_or_die(get_lock_file_fd(lock_file), get_lock_file_path(lock_file));
> > +
> > +		for_each_string_list_item(rename, fsync_state) {
> > +			const char *src = rename->string;
> > +			const char *dst = rename->util;
> > +
> > +			if (finalize_object_file(src, dst))
> > +				die_errno(_("could not rename '%s' to '%s'"), src, dst);
> > +		}
> > +
> > +		string_list_clear(fsync_state, 1);
> > +	}
> > +}
> > +
> >  static int already_written(struct bulk_checkin_state *state, struct object_id *oid)
> >  {
> >  	int i;
> > @@ -256,6 +286,53 @@ static int deflate_to_pack(struct bulk_checkin_state *state,
> >  	return 0;
> >  }
> >
> > +static void add_rename_bulk_checkin(struct string_list *fsync_state,
> > +				    const char *src, const char *dst)
> > +{
> > +	string_list_insert(fsync_state, src)->util = xstrdup(dst);
> > +}
>
> Just has one caller, why not just inline the string_list_insert()
> call...

I thought about doing that before. I'll do it.

> > +int fsync_and_close_loose_object_bulk_checkin(int fd, const char *tmpfile,
> > +					      const char *filename, time_t mtime)
> > +{
> > +	int do_finalize = 1;
> > +	int ret = 0;
> > +
> > +	if (fsync_object_files != FSYNC_OBJECT_FILES_OFF) {
>
> Let's do positive enum comparisons, and with switch() statements, so the
> compiler helps us see whether we've covered them all.

Ok, will switch to switch.
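For the hunk quoted below, I'm thinking of something along these lines
(a sketch that also inlines the string_list_insert() call per your
earlier comment; the close/mtime/finalize tail of the function stays as
posted):

int fsync_and_close_loose_object_bulk_checkin(int fd, const char *tmpfile,
					      const char *filename, time_t mtime)
{
	int do_finalize = 1;
	int ret = 0;

	switch (fsync_object_files) {
	case FSYNC_OBJECT_FILES_OFF:
		break;
	case FSYNC_OBJECT_FILES_BATCH:
		/*
		 * With a plugged bulk checkin, clean the pagecache but
		 * skip the hardware flush; do_sync_and_rename() issues
		 * a single flush before the renames.
		 */
		if (bulk_checkin_plugged &&
		    git_fsync(fd, FSYNC_WRITEOUT_ONLY) >= 0) {
			string_list_insert(&bulk_fsync_state, tmpfile)->util =
				xstrdup(filename);
			do_finalize = 0;
			break;
		}
		/* fall through to a plain fsync if writeout-only failed */
	case FSYNC_OBJECT_FILES_ON:
		fsync_or_die(fd, "loose object file");
		break;
	}

	/* ... close fd, set mtime, and optionally finalize, as posted ... */
}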
> > +		/*
> > +		 * If we have a plugged bulk checkin, we issue a call that
> > +		 * cleans the filesystem page cache but avoids a hardware flush
> > +		 * command. Later on we will issue a single hardware flush
> > +		 * before renaming files as part of do_sync_and_rename.
> > +		 */
> > +		if (bulk_checkin_plugged &&
> > +		    fsync_object_files == FSYNC_OBJECT_FILES_BATCH &&
> > +		    git_fsync(fd, FSYNC_WRITEOUT_ONLY) >= 0) {
> > +			add_rename_bulk_checkin(&bulk_fsync_state, tmpfile, filename);
> > +			do_finalize = 0;
> > +
> > +		} else {
> > +			fsync_or_die(fd, "loose object file");
> > +		}
> > +	}
>
> So nothing ever explicitly checks FSYNC_OBJECT_FILES_ON...?

Yeah, I did it this way to avoid any code duplication, but I can change
to a switch if it doesn't require too much repetition.

> > -extern int fsync_object_files;
> > +enum FSYNC_OBJECT_FILES_MODE {
> > +	FSYNC_OBJECT_FILES_OFF,
> > +	FSYNC_OBJECT_FILES_ON,
> > +	FSYNC_OBJECT_FILES_BATCH
> > +};
>
> Style: We don't use ALL_CAPS for type names in this codebase, just for
> the enum labels themselves....
>
> > +extern enum FSYNC_OBJECT_FILES_MODE fsync_object_files;
>
> ...to the point where I had to rub my eyes to see what was going on here
> ... :)

Sorry, Windows developer :). Will fix.

> > -	fsync_object_files = git_config_bool(var, value);
> > +	if (value && !strcmp(value, "batch"))
> > +		fsync_object_files = FSYNC_OBJECT_FILES_BATCH;
> > +	else if (git_config_bool(var, value))
> > +		fsync_object_files = FSYNC_OBJECT_FILES_ON;
> > +	else
> > +		fsync_object_files = FSYNC_OBJECT_FILES_OFF;
>
> Since the point of this setting is safety, let's explicitly check
> true/false here, use git_config_maybe_bool(), and perhaps issue a
> warning on unknown values, but maybe that would get too verbose...
>
> If we have a future "supersafe" mode, it'll get mapped to "false" on
> older versions of git, which is probably not a good idea...

I took Junio's suggestion verbatim. I'll try a warning if the value
exists and is not 'batch' or a recognized boolean.

Thanks for looking at my changes so thoroughly!
-Neeraj
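P.S. For the config parsing, this is roughly what I have in mind (a
sketch; it assumes git_parse_maybe_bool(), the current spelling of
git_config_maybe_bool(), and the exact warning text is TBD):

	if (!strcmp(var, "core.fsyncobjectfiles")) {
		if (value && !strcmp(value, "batch")) {
			fsync_object_files = FSYNC_OBJECT_FILES_BATCH;
		} else {
			/* -1 means the value is not a recognized boolean */
			int b = git_parse_maybe_bool(value);

			if (b < 0)
				warning(_("ignoring unknown core.fsyncObjectFiles value '%s'"),
					value);
			else
				fsync_object_files = b ? FSYNC_OBJECT_FILES_ON
						       : FSYNC_OBJECT_FILES_OFF;
		}
		return 0;
	}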