On Tue, Sep 21 2021, Neeraj Singh wrote:

> On Tue, Sep 21, 2021 at 4:41 PM Ævar Arnfjörð Bjarmason
> <avarab@xxxxxxxxx> wrote:
>>
>> On Mon, Sep 20 2021, Neeraj Singh via GitGitGadget wrote:
>>
>> > When the new mode is enabled we do the following for new objects:
>> >
>> > 1. Create a tmp_obj_XXXX file and write the object data to it.
>> > 2. Issue a pagecache writeback request and wait for it to complete.
>> > 3. Record the tmp name and the final name in the bulk-checkin state
>> >    for later rename.
>> >
>> > At the end of the entire transaction we:
>> > 1. Issue a fsync against the lock file to flush the hardware writeback
>> >    cache, which should by now have processed the tmp file writes.
>> > 2. Rename all of the temp files to their final names.
>> > 3. When updating the index and/or refs, we assume that Git will issue
>> >    another fsync internal to that operation.
>>
>> Perhaps note too that:
>>
>> 4. For loose objects, refs etc. we may or may not create directories,
>>    and most certainly will be updating metadata on the immediate
>>    directory containing the file, but none of that's fsync()'d.
>>
>> > On a filesystem with a singular journal that is updated during name
>> > operations (e.g. create, link, rename, etc), such as NTFS and HFS+, we
>> > would expect the fsync to trigger a journal writeout so that this
>> > sequence is enough to ensure that the user's data is durable by the
>> > time the git command returns.
>> >
>> > This change also updates the macOS code to trigger a real hardware
>> > flush via fcntl(fd, F_FULLFSYNC) when fsync_or_die is called.
>> > Previously, on macOS there was no guarantee of durability since a
>> > simple fsync(2) call does not flush any hardware caches.
>>
>> There's no discussion of whether this is or isn't known to also work on
>> some Linux FS's, and for the OS's where this does work, is this only
>> for the object files themselves, or does metadata also "ride along"?
>
> I unfortunately can't examine Linux kernel source code, and the details
> of metadata consistency behavior across files are not something that
> anyone in that group wants to pin down. As far as I can tell, the only
> thing that's really guaranteed is fsyncing every single file you write
> down, and its parent directory if you're creating a new file (which we
> always are). As came up in conversation with Christoph Hellwig
> elsewhere on thread, Linux doesn't have any set of syscalls to make
> batch mode safe. It does look like XFS would be safe if sync_file_range
> actually promised to wait for all pagecache writeback definitively,
> since it would do a "log force" to push all the dirty metadata to disk
> when we do our final fsync.
>
> I really didn't want to say something definitive about what Linux can
> or will do, since I'm not in a position to really know or influence
> them. Christoph did say that he would be interested in contributing a
> variant of this patch that would be definitively safe on filesystems
> that honor syncfs.

*nod*, it's fine if it's omitted. Just wondering if we knew but weren't
saying etc. (I've put a rough sketch of the writeback vs. hardware-flush
primitives I have in mind at the end of this mail.)

>> > _Performance numbers_:
>> >
>> > Linux - Hyper-V VM running Kernel 5.11 (Ubuntu 20.04) on a fast SSD.
>> > Mac - macOS 11.5.1 running on a Mac mini on a 1TB Apple SSD.
>> > Windows - Same host as Linux, a preview version of Windows 11.
>> >           This number is from a patch later in the series.
>> >
>> > Adding 500 files to the repo with 'git add'. Times reported in seconds.
>> >
>> > core.fsyncObjectFiles | Linux | Mac   | Windows
>> > ----------------------|-------|-------|--------
>> > false                 | 0.06  | 0.35  | 0.61
>> > true                  | 1.88  | 11.18 | 2.47
>> > batch                 | 0.15  | 0.41  | 1.53
>>
>> Per my https://lore.kernel.org/git/87mtp5cwpn.fsf@xxxxxxxxxxxxxxxxxxx
>> and 6/6 in this series we've got perf tests for add/stash, but it would
>> be really interesting to see how this is impacted by
>> transfer.unpackLimit in cases where we may be writing packs or loose
>> objects.
>
> I'm having trouble understanding how unpackLimit is related to 'git
> stash' or 'git add'. From code inspection, it doesn't look like we're
> using those settings for adding objects except from across a transport.
>
> Are you proposing that we have a similar setting for adding objects via
> 'add' using a packfile? I think that would be a good goal, but it might
> be a bit tricky since we've likely done a lot of the work to buffer the
> input objects in order to compute their OIDs before we know how many
> objects there are to add. If the policy were to "always add to a
> packfile", it would be easier.

No, just that in the documentation we should be explaining to the reader
that this mode, which optimizes for loose object writing, benefits
particular commands; e.g. on the server side we'll probably never write
500 objects, but stream them into one pack instead.

Which might also inform next steps for the commands this does help with,
i.e. can we make more things stream to packs? I think having this mode
is at worst a good transitory thing to have, but perhaps longer term
we'll want to simply write fewer individual loose objects.

In any case, pushing to a server with this configured and scaling that
by transfer.unpackLimit should nicely demonstrate the pack vs. loose
object scenario at different fsync settings.

>>
>> > [...]
>> >  core.fsyncObjectFiles::
>> > -	This boolean will enable 'fsync()' when writing object files.
>> > -+
>> > -This is a total waste of time and effort on a filesystem that orders
>> > -data writes properly, but can be useful for filesystems that do not use
>> > -journalling (traditional UNIX filesystems) or that only journal metadata
>> > -and not file contents (OS X's HFS+, or Linux ext3 with "data=writeback").
>> > +	A value indicating the level of effort Git will expend in
>> > +	trying to make objects added to the repo durable in the event
>> > +	of an unclean system shutdown. This setting currently only
>> > +	controls the object store, so updates to any refs or the
>> > +	index may not be equally durable.
>>
>> All these mentions of "object" should really clarify that it's "loose
>> objects", i.e. we always fsync pack files.
>>
>> > +* `false` allows data to remain in file system caches according to
>> > +  operating system policy, whence it may be lost if the system loses
>> > +  power or crashes.
>>
>> As noted in point #4 of
>> https://lore.kernel.org/git/87mtp5cwpn.fsf@xxxxxxxxxxxxxxxxxxx/ while
>> this direction is overall an improvement over the previously flippant
>> docs, they at least alluded to the context that the assumption behind
>> "false" is that you don't really care about loose objects, you care
>> about loose objects *and* the ref update or whatever.
>>
>> As I think (this is from memory) we've covered already, this may have
>> been all based on some old ext3 assumption, but it's probably worth
>> summarizing that here, i.e. if you've got an FS with globally ordered
>> operations you can probably skip this, but probably not etc.
>>
>> > +* `true` triggers a data integrity flush for each object added to the
>> > +  object store. This is the safest setting that is likely to ensure
>> > +  durability across all operating systems and file systems that honor
>> > +  the 'fsync' system call. However, this setting comes with a
>> > +  significant performance cost on common hardware.
>>
>> This is really overpromising things by omitting the fact that even if
>> we're getting this feature you've hacked up right, we're still not
>> fsyncing dir entries etc. (also noted above).
>>
>> So something that describes the narrow scope here, along with "loose
>> objects" etc....
>>
>> > +* `batch` enables an experimental mode that uses interfaces available
>> > +  in some operating systems to write object data with a minimal set of
>> > +  FLUSH CACHE (or equivalent) commands sent to the storage controller.
>> > +  If the operating system interfaces are not available, this mode
>> > +  behaves the same as `true`. This mode is expected to be safe on
>> > +  macOS for repos stored on HFS+ or APFS filesystems and on Windows
>> > +  for repos stored on NTFS or ReFS.
>>
>> Again, even if it's called "core.fsyncObjectFiles", if we're going to
>> say "safe" we really need to say safe in what sense. Having written and
>> fsync()'d the file is helping nobody if the metadata never arrives....
>
> My concern with your feedback here is that this is user-facing
> documentation. I'd assume that people who are not intimately familiar
> with both their filesystem and Git's internals would just be completely
> mystified by a long commentary on the specifics in the config
> documentation. I think over time Git should focus on making this
> setting really guarantee durability in a meaningful way across the
> entire repository.

Yeah, this setting though is probably going to be tweaked only by fairly
expert-level users of git. I think it's fine if it just explicitly punts
and says something like "this is what it does, this may or may not work
on your FS" etc.; my main issue with the current docs is that they give
off this vibe of knowing a lot more than they're telling you.

>> > +static void do_sync_and_rename(struct string_list *fsync_state, struct lock_file *lock_file)
>> > +{
>> > +	if (fsync_state->nr) {
>>
>> I think less indentation here would be nice:
>>
>>	if (!fsync_state->nr)
>>		return;
>>	/* rest of unindented body */
>
> Will fix.
>
>> Or better yet do this check in unplug_bulk_checkin(), then here:
>>
>>	fsync_or_die();
>>	for_each_string_list_item() { ... }
>>	string_list_clear(....);
>
> I'd prefer to put it in the callee for reasons of separation of
> concerns. I don't want to have the caller and callee partially
> implement the contract. The compiler should do a good enough job, since
> it's only one caller and will probably get totally inlined.

*nod* For what it's worth I meant the "inlined" just in terms of
avoiding the indirection for human readers, it won't matter to the
machine, especially since this is all I/O bound...
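
Just to spell out the shape I mean, here's a rough sketch of that
early-return version. The body is paraphrased, not lifted from your
patch: I'm guessing at finalize_object_file() for the rename step and
assuming item->string/item->util hold the tmp and final names, so adjust
to whatever the real bulk-checkin state looks like.

    #include "cache.h"
    #include "lockfile.h"
    #include "string-list.h"

    /* Sketch only: early-return, then one flush, then the renames. */
    static void do_sync_and_rename(struct string_list *fsync_state,
                                   struct lock_file *lock_file)
    {
            struct string_list_item *item;

            if (!fsync_state->nr)
                    return;

            /* One hardware flush covering all the already-written tmp objects. */
            fsync_or_die(get_lock_file_fd(lock_file),
                         get_lock_file_path(lock_file));

            /* Move the tmp files to their final object names. */
            for_each_string_list_item(item, fsync_state) {
                    const char *tmp_name = item->string;
                    const char *final_name = item->util;

                    if (finalize_object_file(tmp_name, final_name))
                            die(_("could not rename '%s' to '%s'"),
                                tmp_name, final_name);
            }

            string_list_clear(fsync_state, 1);
    }

Whether the !nr check lives here or in unplug_bulk_checkin() doesn't
matter much to me, as long as the common body isn't a whole extra level
of indentation.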
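
And since the macOS/Linux specifics came up above, this is roughly the
pair of primitives I understand the batch mode to be built on; again
just a sketch, not your patch, and whether sync_file_range() is really a
sufficient "wait for all pagecache writeback" is exactly the open
question you mention:

    #define _GNU_SOURCE             /* for sync_file_range() on Linux */
    #include <fcntl.h>
    #include <unistd.h>

    /*
     * Sketch only: "writeout" pushes dirty pages toward the device
     * without requesting a FLUSH CACHE; the single hardware flush at the
     * end of the transaction is what makes the whole batch durable.
     */
    static int writeout_only(int fd)
    {
    #if defined(__APPLE__)
            /* Plain fsync() on macOS writes back the FS cache, no hardware flush. */
            return fsync(fd);
    #elif defined(__linux__)
            return sync_file_range(fd, 0, 0,
                                   SYNC_FILE_RANGE_WAIT_BEFORE |
                                   SYNC_FILE_RANGE_WRITE |
                                   SYNC_FILE_RANGE_WAIT_AFTER);
    #else
            return fsync(fd);
    #endif
    }

    static int hardware_flush(int fd)
    {
    #if defined(__APPLE__)
            /* F_FULLFSYNC asks the device to flush its writeback cache too. */
            return fcntl(fd, F_FULLFSYNC);
    #else
            return fsync(fd);
    #endif
    }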