Re: [PATCH 2/2] core.fsyncobjectfiles: batch disk flushes

On Fri, Aug 27, 2021 at 05:20:44PM -0700, Neeraj Singh wrote:
> You're right.  On re-read of the man page, sync_file_range is listed
> as an "extremely dangerous" system call.  The opportunity in the
> Linux kernel is to offer an alternative set of flags or a separate
> API that allows an application like Git to separate a metadata
> writeback request from the disk flush.

How do you want to do that?  A metadata writeback without a cache flush
is worse than useless; in fact, it is generally actively harmful.
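
(Just to spell out what userspace has today -- a rough sketch of my own,
not anything from the patch under discussion: even the strongest flag
combination of sync_file_range(2) only schedules and waits for data
writeback.  It flushes no device cache and commits no metadata, which is
exactly why the man page calls it dangerous as a durability primitive.)

#define _GNU_SOURCE
#include <fcntl.h>

/*
 * The most sync_file_range(2) can do: wait for writeback already in
 * flight, start writeback for the whole file, wait for it to finish.
 * No disk cache flush and no metadata commit, so nothing here is
 * durable across a power failure.
 */
static int start_and_wait_for_writeback(int fd)
{
        return sync_file_range(fd, 0, 0,
                               SYNC_FILE_RANGE_WAIT_BEFORE |
                               SYNC_FILE_RANGE_WRITE |
                               SYNC_FILE_RANGE_WAIT_AFTER);
}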

To take XFS as an example, fsync and fdatasync do the following:

 1) write back all dirty data for the file to the data device
 2) flush the write cache of the data device to ensure the data is
    really on disk before writing back the metadata referring to it
 3) write out the log up to the log sequence that contained the last
    modifications to the file
 4) flush the cache for the log device.
    If the data device and the log device are the same (they usually are
    for common setups) and the log device supports the FUA bit that
    writes through the cache, the log writes use that bit and this step
    can be skipped.

So in general there are very few metadata writes, and it is absolutely
essential to flush the cache before that, because otherwise your metadata
could point to data that might not actually have made it to disk.
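
Seen from userspace (a trivial sketch of mine just to tie the steps above
to the syscalls, nothing specific to the patch): writing new data and then
calling fdatasync() pays for all four of those steps, once per file:

#include <unistd.h>

/*
 * One durable write, per file: the fdatasync() after write() is what
 * triggers steps 1-4 above (data writeback, data-device cache flush,
 * log write covering the new allocations, log-device flush/FUA write).
 */
static int write_one_durably(int fd, const void *buf, size_t len)
{
        if (write(fd, buf, len) != (ssize_t)len)
                return -1;
        return fdatasync(fd);
}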

The best way to optimize such a workload is to first batch all the
data writeout for multiple files in step one, and then do only one cache
flush and one log force (as we call it) to cover all the files.  syncfs
will do that, but without a good way to pick individual files.
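
A sketch of that batching idea (my illustration, not necessarily what the
Git series does): start writeback for every file with no flush at all,
then pay for a single cache flush and log force at the end.  On Linux the
closing step could be syncfs(2), with the caveat above that it covers the
whole filesystem rather than just the files you care about:

#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

/*
 * Batched variant of the per-file fdatasync() loop: queue data
 * writeback for each file without any cache flush, then let a single
 * syncfs() do one data flush plus one log force for everything on the
 * filesystem -- per filesystem, not per file, which is the limitation
 * mentioned above.
 */
static int sync_files_batched(const int *fds, int nr)
{
        int i;

        if (nr <= 0)
                return 0;

        for (i = 0; i < nr; i++)
                if (sync_file_range(fds[i], 0, 0, SYNC_FILE_RANGE_WRITE) < 0)
                        return -1;

        return syncfs(fds[0]);
}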

> Separately, I'm hoping I can push from the Windows filesystem side to
> get a barrier primitive put into the NVMe standard so that we can
> offer more useful behavior to applications rather than these painful
> hardware flushes.

I'm not sure what you mean by barriers, but if you mean the concept of
imposing a global ordering on I/Os, as we did in Linux back in the bad
old days of the barrier bio flag, or as badly reinvented by this paper:

  https://www.usenix.org/conference/fast18/presentation/won

they might help a little bit with single-threaded operations, but will
heavily degrade I/O performance for multithreaded workloads.  As an
active member of (but not speaking for) the NVMe technical working group,
with a bit of knowledge of SSD internals, I also doubt it will be very
well received there.


