Re: LSF/MM/BPF 2023 IOMAP conversion status update

On Sun, Jan 29, 2023 at 05:06:47AM +0000, Matthew Wilcox wrote:
> On Sat, Jan 28, 2023 at 08:46:45PM -0800, Luis Chamberlain wrote:
> > I'm hoping this *might* be useful to some, but I fear it may leave quite
> > a bit of folks with more questions than answers as it did for me. And
> > hence I figured that *this aspect of this topic* perhaps might be a good
> > topic for LSF.  The end goal would hopefully then be finally enabling us
> > to document IOMAP API properly and helping with the whole conversion
> > effort.
> 
> +1 from me.
> 
> I've made a couple of abortive efforts to try and convert a "trivial"
> filesystem like ext2/ufs/sysv/jfs to iomap, and I always get hung up on
> what the semantics are for get_block_t and iomap_begin().

Yup.  I wrote about it a little bit here:

https://lore.kernel.org/linux-fsdevel/Y%2Fz%2FJrV8qRhUcqE7@magnolia/T/#mda6c3175857d1e4cba88dca042fee030207df4f6

...and promised that I'd get back to writeback.
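
To frame the get_block_t vs. iomap_begin() comparison, the two
interfaces look roughly like this (as best I recall from
include/linux/fs.h and include/linux/iomap.h; check the headers for the
current prototypes):

/* buffer_head world: the fs maps one block per call into a bh */
typedef int (get_block_t)(struct inode *inode, sector_t iblock,
                          struct buffer_head *bh_result, int create);

/* iomap world: the fs describes a whole extent up front */
struct iomap_ops {
        int (*iomap_begin)(struct inode *inode, loff_t pos, loff_t length,
                           unsigned flags, struct iomap *iomap,
                           struct iomap *srcmap);
        int (*iomap_end)(struct inode *inode, loff_t pos, loff_t length,
                         ssize_t written, unsigned flags,
                         struct iomap *iomap);
};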

For buffered IO, iomap does things in a very different order than (I
think) most filesystems.  Traditionally the order is i_rwsem ->
mapping invalidate_lock -> page lock -> get mapping.

iomap relies on the callers to take i_rwsem, asks the filesystem for a
mapping (with whatever locking that entails), and only then starts
locking pagecache folios to operate on them.  IOWs, involving the
filesystem earlier in the process enables it to make better decisions
about space allocation, which in turn should make things faster and
less prone to fragmentation.
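
As a concrete (and deliberately dumbed-down) example, ->iomap_begin for
a simple block-mapping filesystem might look like the sketch below.
simplefs_map_block() is a made-up helper standing in for whatever block
lookup/allocation the fs does under its own locks, and the iomap fields
are filled in per my reading of include/linux/iomap.h:

static int simplefs_iomap_begin(struct inode *inode, loff_t pos,
                loff_t length, unsigned flags, struct iomap *iomap,
                struct iomap *srcmap)
{
        sector_t blkno;
        int error;

        /*
         * All fs-internal locking and allocation decisions happen here,
         * before iomap has locked any pagecache folios.
         */
        error = simplefs_map_block(inode, pos >> inode->i_blkbits, &blkno,
                                   flags & IOMAP_WRITE);
        if (error)
                return error;

        iomap->bdev = inode->i_sb->s_bdev;
        iomap->offset = pos & ~(loff_t)(i_blocksize(inode) - 1);
        iomap->length = i_blocksize(inode);
        if (!blkno) {                   /* made-up "unmapped" convention */
                iomap->type = IOMAP_HOLE;
                iomap->addr = IOMAP_NULL_ADDR;
        } else {
                iomap->type = IOMAP_MAPPED;
                iomap->addr = (u64)blkno << inode->i_blkbits;
        }
        return 0;
}

A real conversion would of course hand back as long a mapping as the fs
can produce in one call; returning one block at a time works, but it
throws away most of the benefit.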

OTOH, it also means that we've learned the hard way that pagecache
operations need a means to revalidate mappings to avoid write races.
This applies both to the initial pagecache write and to scheduling
writeback, but the mechanisms for each were developed separately and
years apart.  See iomap::validity_cookie and
xfs_writepage_ctx::{data,cow}_seq for what I'm talking about.
We (xfs developers) ought to figure out if these two mechanisms should
be merged before more filesystems start using iomap for buffered IO.
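
Very roughly, and with the caveat that the names below are from memory
rather than copied out of the tree, the two mechanisms are:

/*
 * 1) Buffered writes: ->iomap_begin stamps the mapping with a per-inode
 *    sequence counter (simplefs_mapping_seq() is a made-up stand-in for
 *    whatever counter the fs bumps when it changes the block mapping)...
 */
        iomap->validity_cookie = simplefs_mapping_seq(inode);

/*
 *    ...and an ->iomap_valid hook in the page/folio ops rechecks it once
 *    the folio is locked, so a mapping that went stale while we waited
 *    for the folio lock forces a remap instead of being written through.
 */
static bool simplefs_iomap_valid(struct inode *inode,
                                 const struct iomap *iomap)
{
        return iomap->validity_cookie == simplefs_mapping_seq(inode);
}

/*
 * 2) Writeback: the writeback context (xfs_writepage_ctx in XFS) caches
 *    data_seq/cow_seq when it maps an extent, and compares them against
 *    the inode fork sequence counters before reusing that cached mapping
 *    for the next folio; a mismatch means "go ask the fs for a new map".
 */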

I'd like to have a discussion about how to clean up and clarify the
iomap interfaces, and a separate one about how to port the remaining 35+
filesystems.  I don't know how exactly to split this into LSF sessions,
other than to suggest at least two.

If hch or dchinner show up, I also want to drag them into this. :)

--D

> > Perhaps fs/buffer.c could be converted to folios only, and be done
> > with it. But would we be losing out on something? What would that be?
> 
> buffer_heads are inefficient for multi-page folios because some of the
> algorithms are O(n^2), where n is the number of buffers in a folio.
> It's fine for 8x 512b buffers in a 4k page, but for 512x 4kb buffers in
> a 2MB folio, it's pretty sticky.  Things like "Read I/O has completed on
> this buffer, can I mark the folio as Uptodate now?"  For iomap, that's a
> scan of a 64 byte bitmap up to 512 times; for BHs, it's a loop over 512
> allocations, looking at one bit in each BH before moving on to the next.
> Similarly for writeback, iirc.
> 
> So +1 from me for a "How do we convert 35-ish block-based filesystems
> from BHs to iomap for their buffered & direct IO paths".  There's maybe a
> separate discussion to be had for "What should the API be for filesystems
> to access metadata on the block device" because I don't believe the
> page-cache based APIs are easy for fs authors to use.
> 
> Maybe some related topics are
> "What testing should we require for some of these ancient filesystems?"
> "Whose job is it to convert these 35 filesystems anyway, can we just
> delete some of them?"
> "Is there a lower-performance but easier-to-implement API than iomap
> for old filesystems that only exist for compatibility reasons?"
> 
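
To put some (hand-wavy) code behind the uptodate-tracking point above,
on read completion the two models boil down to roughly the following
fragments (conceptual only, not the real helpers in
fs/iomap/buffered-io.c and fs/buffer.c):

/* iomap: one bit per block, one bitmap per folio */
        spin_lock_irqsave(&iop->uptodate_lock, flags);
        bitmap_set(iop->uptodate, first_blk, nr_blks);
        if (bitmap_full(iop->uptodate, blks_per_folio))
                folio_mark_uptodate(folio);
        spin_unlock_irqrestore(&iop->uptodate_lock, flags);

/* buffer_heads: walk every bh attached to the folio, one bit per bh */
        bh = head = folio_buffers(folio);
        do {
                if (!buffer_uptodate(bh))
                        return;         /* some block still in flight */
                bh = bh->b_this_page;
        } while (bh != head);
        folio_mark_uptodate(folio);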



