Re: [RFC 2/2] iomap: Support subpage size dirty tracking to improve write performance


Christoph Hellwig <hch@xxxxxxxxxxxxx> wrote:

> The core iomap code (fs/iomap/iter.c) does not.  Most users of it
> are block device centric right now, but for example the dax.c uses
> iomap for byte level DAX accesses without ever looking at a bdev,
> and seek.c and fiemap.c do not make any assumptions on the backend
> implementation.

Whilst that is true, what's in iter.c is extremely minimal and most definitely
not sufficient.  There's no retry logic, for example: what happens when we try
poking the cache and the cache says "no data"?  We have to go back and
redivide the missing bits of the request, as the netfs granularity may not
match that of the cache.  And how do we deal with writes that have to be
duplicated to multiple servers that don't all have the same wsize?

Then there are functions like iomap_read_folio(), iomap_readahead(), etc.,
which *do* use submit_bio() - and these would seem to be the main entry points
into iomap.

Add to that, struct iomap_iter carries two bdev pointers and two dax pointers,
and the iomap_ioend struct assumes bio structs are going to be involved.

Also, there's struct iomap_page - I'm hoping to avoid the need for a dangly
struct on each page.  I *think* I only need an extra couple of bits per page
to discriminate between pages that need writing to the cache, pages that need
writing to the server, and pages that need to go to both - and it may even be
possible to keep track of that in a separate list.  The vast majority of
write patterns are {open,write,write,...,write,close} and for such I just
need a single tracking struct.

David



