Re: [RFC 2/2] iomap: Support subpage size dirty tracking to improve write performance

On Fri, Oct 28, 2022 at 10:00:33AM +0530, Ritesh Harjani (IBM) wrote:
> On 64k pagesize platforms (especially Power and/or aarch64) with 4k
> filesystem blocksize, this patch should improve write performance by
> writing back only the dirty subpage ranges.
> 
> This should also reduce write amplification, since we can now track
> subpage dirty status within state bitmaps. Earlier we had to
> write the entire 64k page even if only a part of it (e.g. 4k) was
> updated.
> 
> Performance testing of the below fio workload reveals a ~16x performance
> improvement on nvme with XFS (4k blocksize) on Power (64k pagesize).
> FIO-reported write bandwidth improved from around ~28 MBps to ~452 MBps.
> 
> <test_randwrite.fio>
> [global]
> 	ioengine=psync
> 	rw=randwrite
> 	overwrite=1
> 	pre_read=1
> 	direct=0
> 	bs=4k
> 	size=1G
> 	dir=./
> 	numjobs=8
> 	fdatasync=1
> 	runtime=60
> 	iodepth=64
> 	group_reporting=1
> 
> [fio-run]
> 
> Reported-by: Aravinda Herle <araherle@xxxxxxxxxx>
> Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@xxxxxxxxx>

To me, this is a fundamental architecture change in the way iomap
interfaces with the page cache and filesystems. Folio based dirty
tracking is top down, whilst filesystem block based dirty tracking
*needs* to be bottom up.

The bottom up approach is what bufferheads do, and it requires a
much bigger change than just adding dirty region tracking to the
iomap write and writeback paths.

That is, moving to tracking dirty regions on a filesystem block
boundary brings back all the coherency problems we had with
trying to keep bufferhead dirty state coherent with page dirty
state. This was one of the major simplifications that the iomap
infrastructure brought to the table - all the dirty tracking is done
by the page cache, and the filesystem has nothing to do with it at
all....
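
To make the scale of the change concrete, here is roughly what the
per-folio state looks like today, with the RFC's direction sketched
in as an assumed extension of the existing bitmap (the dirty half is
not mainline code):

/*
 * Per-folio state from fs/iomap/buffered-io.c (circa v6.0), plus an
 * assumed dirty bitmap. Mainline only tracks uptodate bits here; the
 * dirty bits are a sketch of where the RFC is heading.
 */
struct iomap_page {
	atomic_t		read_bytes_pending;
	atomic_t		write_bytes_pending;
	spinlock_t		uptodate_lock;
	/*
	 * One bit per filesystem block in the folio. Assumed layout:
	 * bits [0, nr_blocks) track uptodate state, bits
	 * [nr_blocks, 2 * nr_blocks) would track dirty state.
	 */
	unsigned long		uptodate[];
};

Extending the bitmap is the mechanical part; everything below is
about what has to be kept true around it.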

IF we are going to change this, then there need to be clear rules
on how iomap dirty state is kept coherent with the folio dirty
state, and there need to be checks placed everywhere to ensure that
the rules are followed and enforced.

So what are the rules? If the folio is dirty, it must have at least one
dirty region? If the folio is clean, can it have dirty regions?
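
One way to make such rules enforceable is an assertion helper that
the write and writeback paths call at the folio/iomap boundary. A
strawman sketch, assuming the dirty bitmap above and a hypothetical
iop_nr_dirty() helper that counts set dirty bits; the real rules
would also have to allow for transient writeback states:

/*
 * Strawman rule: a dirty folio has at least one dirty block, and a
 * clean folio has none. iop_nr_dirty() is hypothetical.
 */
static void iomap_check_dirty_coherency(struct folio *folio,
		struct iomap_page *iop)
{
	bool blocks_dirty = iop_nr_dirty(folio, iop) != 0;

	WARN_ON_ONCE(folio_test_dirty(folio) != blocks_dirty);
}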

What happens to the dirty regions when truncate zeros part of a page
beyond EOF? If the iomap regions are clean, do they need to be
dirtied? If the regions are dirtied, do they need to be cleaned?
Does this hold for all trailing filesystem blocks in the (multipage)
folio, or just the one that spans the new EOF?
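
Purely to make those questions concrete, one possible truncate
policy expressed as a sketch (iop->dirty is the assumed bitmap from
above; none of this is settled):

/*
 * One possible policy, sketched: clear dirty bits for whole blocks
 * beyond the new EOF, but leave the block spanning EOF dirty so its
 * newly zeroed tail still reaches disk. Locking elided.
 */
static void iomap_truncate_clear_dirty(struct inode *inode,
		struct folio *folio, struct iomap_page *iop,
		loff_t new_size)
{
	unsigned int nr_blocks = i_blocks_per_folio(inode, folio);
	loff_t pos = folio_pos(folio);
	unsigned int first_clean = 0;

	if (new_size > pos)
		first_clean = DIV_ROUND_UP(new_size - pos,
					   i_blocksize(inode));
	if (first_clean < nr_blocks)
		bitmap_clear(iop->dirty, first_clean,
			     nr_blocks - first_clean);
}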

What happens with direct extent manipulation like fallocate()
operations? These invalidate the parts of the page cache over the
range we are punching, shifting, etc, without interacting directly
with iomap, so do we now have to ensure that the sub-folio dirty
regions are also invalidated correctly? i.e. do functions like
xfs_flush_unmap_range() need to become iomap infrastructure so that
they can update sub-folio dirty ranges correctly?
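
For illustration only, lifting that helper into iomap might look
something like this; the iomap_flush_unmap_range() name and the
dirty-bit step are assumptions:

/*
 * Sketch of an iomap-owned flush+unmap helper, shaped like
 * xfs_flush_unmap_range(). Today the invalidation happens without
 * iomap knowing about it, which is the coherency gap.
 */
static int iomap_flush_unmap_range(struct inode *inode, loff_t offset,
		loff_t len)
{
	struct address_space *mapping = inode->i_mapping;
	loff_t start = round_down(offset, PAGE_SIZE);
	loff_t end = round_up(offset + len, PAGE_SIZE) - 1;
	int error;

	error = filemap_write_and_wait_range(mapping, start, end);
	if (error)
		return error;
	/*
	 * With sub-folio dirty tracking, the assumed per-block dirty
	 * bits over this range would need clearing here, before the
	 * folios go away.
	 */
	truncate_pagecache_range(inode, start, end);
	return 0;
}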

What about the
folio_mark_dirty()/filemap_dirty_folio()/->dirty_folio()
infrastructure? iomap currently treats this as top down, so it
doesn't actually call back into iomap to mark filesystem blocks
dirty. This would need to be rearchitected to match
block_dirty_folio() where the bufferheads on the page are marked
dirty before the folio is marked dirty by external operations....
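
A sketch of what that bottom-up hook could look like for iomap,
modelled on block_dirty_folio(); iop->dirty is the assumed bitmap
and locking is elided:

/*
 * Mark every sub-folio block dirty before dirtying the folio itself,
 * so external paths like mmap write faults keep both levels coherent.
 */
static bool iomap_dirty_folio(struct address_space *mapping,
		struct folio *folio)
{
	struct iomap_page *iop = to_iomap_page(folio);
	struct inode *inode = mapping->host;

	if (iop)
		bitmap_set(iop->dirty, 0,
			   i_blocks_per_folio(inode, folio));
	return filemap_dirty_folio(mapping, folio);
}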

The easy part of this problem is tracking dirty state on
filesystem block boundaries. The *hard part* is maintaining coherency
with the page cache, and none of that has been done yet. I'd prefer
that we deal with this problem once and for all at the page cache
level because multi-page folios mean even when the filesystem block
is the same as PAGE_SIZE, we have this sub-folio block granularity
tracking issue.

As it is, we already have the capability for the mapping tree to
have multiple indexes pointing to the same folio - perhaps it's time
to start thinking about using filesystem blocks as the mapping tree
index rather than PAGE_SIZE chunks, so that the page cache can then
track dirty state on filesystem block boundaries natively and
this whole problem goes away. We have to solve this sub-folio dirty
tracking problem for multi-page folios anyway, so it seems to me
that we should solve the sub-page block size dirty tracking problem
the same way....

Cheers,

Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx


