Re: Dirty bits and sync writes

On Mon, 2021-08-09 at 16:30 +0100, Matthew Wilcox wrote:
> On Mon, Aug 09, 2021 at 03:48:56PM +0100, Christoph Hellwig wrote:
> > On Tue, Aug 03, 2021 at 04:28:14PM +0100, Matthew Wilcox wrote:
> > > Solution 1: Add an array of dirty bits to the iomap_page
> > > data structure.  This patch already exists; would need
> > > to be adjusted slightly to apply to the current tree.
> > > https://lore.kernel.org/linux-xfs/7fb4bb5a-adc7-5914-3aae-179dd8f3adb1@xxxxxxxxxx/
> > 
> > > Solution 2a: Replace the array of uptodate bits with an array of
> > > dirty bits.  It is not often useful to know which parts of the page are
> > > uptodate; usually the entire page is uptodate.  We can actually use the
> > > dirty bits for the same purpose as uptodate bits; if a block is dirty, it
> > > is definitely uptodate.  If a block is !dirty, and the page is !uptodate,
> > > the block may or may not be uptodate, but it can be safely re-read from
> > > storage without losing any data.
> > 
> > 1 or 2a seems like something we should do once we have large folio
> > support.
> > 
> > 
> > > Solution 2b: Lose the concept of partially uptodate pages.  If we're
> > > going to write to a partial page, just bring the entire page uptodate
> > > first, then write to it.  It's not clear to me that partially-uptodate
> > > pages are really useful.  I don't know of any network filesystems that
> > > support partially-uptodate pages, for example.  It seems to have been
> > > something we did for buffer_head based filesystems "because we could"
> > > rather than finding a workload that actually cares.
> > 

I may be wrong, but I thought NFS could actually deal with partially
uptodate pages. In some cases it can opt to just write to a page without
reading it in first, and then flush just that section when the time
comes.

I think the heuristics are in nfs_want_read_modify_write(). #3 may be a
better way, though.
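
Roughly, the idea there (a simplified sketch from memory, not the actual
upstream code -- the helper name below is illustrative only) is to read
the page in first only when a sub-page write would otherwise clobber
valid cached bytes we can't regenerate:

static bool want_read_modify_write(struct file *file, struct page *page,
                                   loff_t pos, unsigned int len)
{
        unsigned int pglen = nfs_page_length(page);  /* valid bytes in page */
        unsigned int offset = pos & (PAGE_SIZE - 1);
        unsigned int end = offset + len;

        if (PageUptodate(page))                 /* already fully cached */
                return false;
        if (!pglen)                             /* page entirely past EOF */
                return false;
        if (offset == 0 && end >= pglen)        /* write covers all valid bytes */
                return false;
        return file->f_mode & FMODE_READ;       /* only worth it if readable */
}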

> > The uptodate bit is important for the use case of a smaller-than-page-size
> > buffered write into a page that hasn't been read in already, which
> > is fairly common for things like log writes.  So I'd hate to lose this
> > optimization.
> > 
> > > (it occurs to me that solution 3 actually allows us to do IOs at storage
> > > block size instead of filesystem block size, potentially reducing write
> > > amplification even more, although we will need to be a bit careful if
> > > we're doing a CoW.)
> > 
> > number 3 might be a nice optimization.  The even better version would
> > be a disk format change to just log those updates in the log and
> > otherwise use the normal dirty mechanism.  I once had a crude prototype
> > for that.
> 
> That's a bit beyond my scope at this point.  I'm currently working on
> write-through.  Once I have that working, I think the next step is:
> 
>  - Replace the ->uptodate array with a ->dirty array
>  - If the entire page is Uptodate, drop the iomap_page.  That means that
>    writebacks will write back the entire folio, not just the dirty
>    pieces.
>  - If doing a partial page write
>    - If the write is block-aligned (offset & length), leave the page
>      !Uptodate and mark the dirty blocks
>    - Otherwise bring the entire page Uptodate first, then mark it dirty
> 
> To take an example of a 512-byte block size file accepting a 520 byte
> write at offset 500, we currently submit two reads, one for bytes 0-511
> and the second for 1024-1535.  We're better off submitting a read for
> bytes 0-4095 and then overwriting the entire thing.
> 
> But it's still better to do no reads at all if someone submits a write
> for bytes 512-1023, or 512-N where N is past EOF.  And I'd preserve
> that behaviour.
> 

I like this idea too.
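
To make sure I'm reading it right, here's a back-of-the-envelope sketch
of what the write_begin decision might look like with a per-block dirty
bitmap. iop->dirty and the helpers here are made up to illustrate the
proposal, not an actual patch:

static int iomap_write_begin_sketch(struct inode *inode, struct page *page,
                                    struct iomap_page *iop,
                                    loff_t pos, unsigned int len)
{
        unsigned int bsize = i_blocksize(inode);
        unsigned int poff = pos & (PAGE_SIZE - 1);
        unsigned int first = poff / bsize;
        unsigned int nblks = DIV_ROUND_UP(len, bsize);

        if (PageUptodate(page))
                return 0;                       /* nothing to read */

        if (IS_ALIGNED(pos, bsize) && IS_ALIGNED(len, bsize)) {
                /*
                 * Block-aligned partial write: leave the page !Uptodate
                 * and just mark the blocks we're about to overwrite as
                 * dirty.  A dirty block is by definition uptodate, so
                 * writeback can still trust its contents.
                 */
                bitmap_set(iop->dirty, first, nblks);
                return 0;
        }

        /*
         * Unaligned write: bring the whole page uptodate with one read
         * (e.g. a single read of bytes 0-4095 for the 520-byte write at
         * offset 500 above), then dirty it as usual.  A tail that runs
         * past EOF could still skip the read and just be zeroed instead.
         */
        return read_whole_page_sync(inode, page);       /* made-up helper */
}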

I'd also point out that both cifs and ceph (at least) can read and write
"around" the cache in some cases (using non-pagecache pages) when they
can't get the proper oplock/lease/caps from the server. Both of them
have completely separate "uncached" codepaths that are distinct from
the O_DIRECT cases.

This scheme could potentially be a saner method of dealing with those
situations too.
-- 
Jeff Layton <jlayton@xxxxxxxxxx>



