Re: [RFC 2/2] iomap: Support subpage size dirty tracking to improve write performance

On Sun, Oct 30, 2022 at 08:57:58AM +0530, Ritesh Harjani (IBM) wrote:
> On 22/10/29 08:04AM, Dave Chinner wrote:
> > On Fri, Oct 28, 2022 at 10:00:33AM +0530, Ritesh Harjani (IBM) wrote:
> > > On 64k pagesize platforms (especially Power and/or aarch64) with a 4k
> > > filesystem blocksize, this patch should improve write performance by
> > > writing back only the dirty subpage data.
> > > 
> > > This should also reduce write amplification, since we can now track
> > > subpage dirty status within state bitmaps. Earlier we had to
> > > write back the entire 64k page even if only a part of it (e.g. 4k)
> > > was updated.
> > > 
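A minimal sketch of what such per-folio state bitmaps could look like
(struct and helper names here are illustrative, not necessarily what the
patch uses): one bit per filesystem block for uptodate state, plus a
second bit per block for dirty state, so writeback can skip the clean
blocks inside a dirty folio.

	/* Illustrative only; a real version would live in
	 * fs/iomap/buffered-io.c with its usual headers. */
	struct iomap_folio_state {
		spinlock_t	state_lock;
		/*
		 * Bits [0 .. nblks) track per-block uptodate state;
		 * bits [nblks .. 2 * nblks) track per-block dirty state.
		 */
		unsigned long	state[];
	};

	static void ifs_set_range_dirty(struct folio *folio,
			struct iomap_folio_state *ifs, size_t off, size_t len)
	{
		struct inode *inode = folio->mapping->host;
		unsigned int nblks = i_blocks_per_folio(inode, folio);
		unsigned int first_blk = off >> inode->i_blkbits;
		unsigned int last_blk = (off + len - 1) >> inode->i_blkbits;
		unsigned long flags;

		/* Dirty bits live in the second half of the bitmap. */
		spin_lock_irqsave(&ifs->state_lock, flags);
		bitmap_set(ifs->state, nblks + first_blk,
			   last_blk - first_blk + 1);
		spin_unlock_irqrestore(&ifs->state_lock, flags);
	}
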
> > > Performance testing of the below fio workload reveals a ~16x performance
> > > improvement on NVMe with XFS (4k blocksize) on Power (64K pagesize).
> > > FIO-reported write bandwidth improved from ~28 MBps to ~452 MBps.
> > > 
> > > <test_randwrite.fio>
> > > [global]
> > > 	ioengine=psync
> > > 	rw=randwrite
> > > 	overwrite=1
> > > 	pre_read=1
> > > 	direct=0
> > > 	bs=4k
> > > 	size=1G
> > > 	dir=./
> > > 	numjobs=8
> > > 	fdatasync=1
> > > 	runtime=60
> > > 	iodepth=64
> > > 	group_reporting=1
> > > 
> > > [fio-run]
> > > 
> > > Reported-by: Aravinda Herle <araherle@xxxxxxxxxx>
> > > Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@xxxxxxxxx>
> > 
> > To me, this is a fundamental architecture change in the way iomap
> > interfaces with the page cache and filesystems. Folio based dirty
> > tracking is top down, whilst filesystem block based dirty tracking
> > *needs* to be bottom up.
> > 
> > The bottom up approach is what bufferheads do, and it requires a
> > much bigger change than just adding dirty region tracking to the
> > iomap write and writeback paths.
> > 
> > That is, moving to tracking dirty regions on a filesystem block
> > boundary brings back all the coherency problems we had with
> > trying to keep bufferhead dirty state coherent with page dirty
> > state. This was one of the major simplifications that the iomap
> > infrastructure brought to the table - all the dirty tracking is done
> 
> Sure, but then with the simplified iomap design, the optimizations in the
> workload that I mentioned are also lost :(

Yes, we knew that when we first started planning to get rid of
bufferheads from XFS. That was, what, 7 years ago we started down
that path, and it's been that way in production systems since at
least 4.19.

> > by the page cache, and the filesystem has nothing to do with it at
> > all....
> > 
> > IF we are going to change this, then there needs to be clear rules
> > on how iomap dirty state is kept coherent with the folio dirty
> > state, and there need to be checks placed everywhere to ensure that
> > the rules are followed and enforced.
> 
> Sure.
> 
> > 
> > So what are the rules? If the folio is dirty, must it have at least one
> > dirty region? If the folio is clean, can it have dirty regions?
> > 
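One way to make the first rule concrete and enforceable would be a cheap
debug assertion; this sketch assumes the illustrative bitmap layout
sketched earlier (all names hypothetical):

	/* Assert "folio dirty => at least one dirty block". */
	static void ifs_check_dirty_invariant(struct folio *folio,
			struct iomap_folio_state *ifs)
	{
		struct inode *inode = folio->mapping->host;
		unsigned int nblks = i_blocks_per_folio(inode, folio);
		/* Dirty bits occupy [nblks, 2 * nblks). */
		bool has_dirty = find_next_bit(ifs->state, 2 * nblks,
					       nblks) < 2 * nblks;

		if (folio_test_dirty(folio))
			WARN_ON_ONCE(!has_dirty);
	}
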
> > What happens to the dirty regions when truncate zeros part of a page
> > beyond EOF? If the iomap regions are clean, do they need to be
> > dirtied? If the regions are dirtied, do they need to be cleaned?
> > Does this hold for all trailing filesystem blocks in the (multipage)
> > folio, or just the one that spans the new EOF?
> > 
> > What happens with direct extent manipulation like fallocate()
> > operations? These invalidate the parts of the page cache over the
> > range we are punching, shifting, etc, without interacting directly
> > with iomap, so do we now have to ensure that the sub-folio dirty
> > regions are also invalidated correctly? i.e. do functions like
> > xfs_flush_unmap_range() need to become iomap infrastructure so that
> > they can update sub-folio dirty ranges correctly?
> > 
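For reference, a condensed sketch of what xfs_flush_unmap_range()
currently does (simplified from fs/xfs/xfs_bmap_util.c): it writes back
and then drops the page cache over the range without ever going through
iomap, which is why sub-folio dirty state would need its own
invalidation hook:

	int xfs_flush_unmap_range(struct xfs_inode *ip, xfs_off_t offset,
				  xfs_off_t len)
	{
		struct inode	*inode = VFS_I(ip);
		xfs_off_t	rounding, start, end;
		int		error;

		rounding = max_t(xfs_off_t, ip->i_mount->m_sb.sb_blocksize,
				 PAGE_SIZE);
		start = round_down(offset, rounding);
		end = round_up(offset + len, rounding) - 1;

		/* Flush dirty pages... */
		error = filemap_write_and_wait_range(inode->i_mapping,
						     start, end);
		if (error)
			return error;
		/* ...then invalidate them. iomap never sees this. */
		truncate_pagecache_range(inode, start, end);
		return 0;
	}
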
> > What about the
> > folio_mark_dirty()/filemap_dirty_folio()/->dirty_folio()
> > infrastructure? iomap currently treats this as top down, so it
> > doesn't actually call back into iomap to mark filesystem blocks
> > dirty. This would need to be rearchitected to match
> > block_dirty_folio() where the bufferheads on the page are marked
> > dirty before the folio is marked dirty by external operations....
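
For comparison, a simplified sketch of block_dirty_folio() from
fs/buffer.c (the real version open-codes the folio dirtying under memcg
locking), showing the bottom-up ordering: bufferheads are dirtied first,
the folio second:

	bool block_dirty_folio(struct address_space *mapping,
			       struct folio *folio)
	{
		struct buffer_head *head, *bh;

		/* Dirty every buffer attached to the folio... */
		spin_lock(&mapping->private_lock);
		head = folio_buffers(folio);
		if (head) {
			bh = head;
			do {
				set_buffer_dirty(bh);
				bh = bh->b_this_page;
			} while (bh != head);
		}
		spin_unlock(&mapping->private_lock);

		/* ...and only then mark the folio itself dirty. */
		return filemap_dirty_folio(mapping, folio);
	}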
> 
> Sure, thanks for clearly listing out all of the paths.
> Let me carefully review these paths to check how adding a state
> bitmap to iomap_page for dirty tracking would be handled in every case
> you mentioned above. I would like to ensure that we have reviewed all
> the paths and that this approach works, both functionally and
> theoretically. (Mainly I want to go over the truncate & fallocate
> paths that you listed above.)

I'm kinda pointing all this out as the reason why we don't want
to go down this path again - per-filesystem-block dirty tracking was
the single largest source of data corruption bugs in XFS back in the
days of bufferheads...

I really, really don't want the iomap infrastructure to head back in
that direction - we know it leads to on-going data corruption pain
because that's where we came from to get here.

I would much prefer we move to a different model for semi-random
sub-page writes. That was also Willy's suggestion - async
write-through caching (so the page never goes through a dirty state
for sub-page writes) is a much better model for modern high
performance storage systems than the existing buffered writeback
model....

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx


