On Mon, Jul 26, 2021 at 12:16:39AM +0200, Andreas Gruenbacher wrote:
> Here's a fixed and cleaned up version that passes fstests on gfs2.
>
> I see no reason why the combination of tail packing + writing should
> cause any issues, so in my opinion, the check that disables that
> combination in iomap_write_begin_inline should still be removed.

Since there is no filesystem that supports tail-packing writes yet, I
can only make a wild guess here; for example:

 1) the tail-end block was not inlined at first, so iomap_write_end()
    dirtied the whole page (or buffer) for page writeback;

 2) the file was then truncated into a tail-packing inline block, so
    the last extent (page) became INLINE but stayed dirty;

 3) during the later writeback of dirty pages,
        if (WARN_ON_ONCE(wpc->iomap.type == IOMAP_INLINE))
    would be triggered in iomap_writepage_map() for such a dirty page.

As Matthew pointed out before,
https://lore.kernel.org/r/YPrms0fWPwEZGNAL@xxxxxxxxxxxxxxxxxxxx/
tail-packing inline data currently doesn't interact with page
writeback at all, but I'm afraid a filesystem that does support
tail-packing writes would need to reconsider how page and inode
writeback work and what the pattern looks like with tail packing.
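To make that sequence concrete, here is a minimal userspace sketch of
steps 1)-3); the enum, struct and field names below are made up for
illustration and are deliberately not the kernel's definitions:

#include <stdio.h>
#include <stdbool.h>

/* Toy stand-ins for the extent mapping type and the page state. */
enum toy_extent_type { TOY_MAPPED, TOY_INLINE };

struct toy_tail {
	enum toy_extent_type type;	/* how the fs maps the tail block */
	bool page_dirty;		/* page cache page covering the tail */
};

int main(void)
{
	struct toy_tail tail = { .type = TOY_MAPPED, .page_dirty = false };

	/* 1) buffered write: iomap_write_end() dirties the page */
	tail.page_dirty = true;

	/* 2) truncate packs the tail: the extent becomes inline while the
	 *    page cache page is still dirty */
	tail.type = TOY_INLINE;

	/* 3) later writeback maps the dirty page and gets an inline extent
	 *    back, which is the case iomap_writepage_map() warns about */
	if (tail.page_dirty && tail.type == TOY_INLINE)
		printf("writeback would hit WARN_ON_ONCE(iomap.type == IOMAP_INLINE)\n");

	return 0;
}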
> It turns out that returning the number of bytes copied from
> iomap_read_inline_data is a bit irritating: the function is really used
> for filling the page, but that's not always the "progress" we're looking
> for. In the iomap_readpage case, we actually need to advance by an
> entire page, but in the iomap_file_buffered_write case, we need to
> advance by the length parameter of iomap_write_actor or less. So I've
> changed that back.
>
> I've also renamed iomap_inline_buf to iomap_inline_data and I've turned
> iomap_inline_data_size_valid into iomap_within_inline_data, which seems
> more useful to me.
>
> Thanks,
> Andreas
>
> --
>
> Subject: [PATCH] iomap: Support tail packing
>
> The existing inline data support only works for cases where the entire
> file is stored as inline data. For larger files, EROFS stores the
> initial blocks separately and then can pack a small tail adjacent to the
> inode. Generalise inline data to allow for tail packing. Tails may not
> cross a page boundary in memory.
>
> We currently have no filesystems that support tail packing and writing,
> so that case is currently disabled (see iomap_write_begin_inline). I'm
> not aware of any reason why this code path shouldn't work, however.
>
> Cc: Christoph Hellwig <hch@xxxxxx>
> Cc: Darrick J. Wong <djwong@xxxxxxxxxx>
> Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
> Cc: Andreas Gruenbacher <andreas.gruenbacher@xxxxxxxxx>
> Tested-by: Huang Jianan <huangjianan@xxxxxxxx> # erofs
> Signed-off-by: Gao Xiang <hsiangkao@xxxxxxxxxxxxxxxxx>
> ---
>  fs/iomap/buffered-io.c | 34 +++++++++++++++++++++++-----------
>  fs/iomap/direct-io.c   | 11 ++++++-----
>  include/linux/iomap.h  | 22 +++++++++++++++++++++-
>  3 files changed, 50 insertions(+), 17 deletions(-)
>
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index 87ccb3438bec..334bf98fdd4a 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -205,25 +205,29 @@ struct iomap_readpage_ctx {
>  	struct readahead_control *rac;
>  };
> 
> -static void
> -iomap_read_inline_data(struct inode *inode, struct page *page,
> +static int iomap_read_inline_data(struct inode *inode, struct page *page,
>  		struct iomap *iomap)
>  {
> -	size_t size = i_size_read(inode);
> +	size_t size = i_size_read(inode) - iomap->offset;

I wonder why you use i_size / iomap->offset here, and why you
completely ignore the iomap->length field returned by the fs.

Using i_size here instead of iomap->length seems like unnecessary
coupling to me in the first place (even though there is currently some
limitation in practice); the toy example at the end of this mail
illustrates the difference I mean.

Thanks,
Gao Xiang
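As a self-contained illustration of the point above (this is not
kernel code; the struct and the numbers are made up), the two ways of
computing the inline data size only agree as long as the inline extent
ends exactly at EOF:

#include <stdio.h>
#include <stdint.h>

/* Toy stand-in for the two iomap fields involved; illustrative only. */
struct toy_iomap {
	uint64_t offset;	/* file offset this extent starts at */
	uint64_t length;	/* extent length as returned by the fs */
};

int main(void)
{
	/* An inline tail that starts at 8192 and is 100 bytes long. */
	struct toy_iomap iomap = { .offset = 8192, .length = 100 };
	uint64_t i_size = 8292;		/* tail ends exactly at EOF */

	uint64_t from_isize  = i_size - iomap.offset;	/* what the patch computes */
	uint64_t from_length = iomap.length;		/* what the fs reported */

	printf("from i_size: %llu, from iomap->length: %llu\n",
	       (unsigned long long)from_isize,
	       (unsigned long long)from_length);

	/* If a filesystem ever returned an inline extent that did not end
	 * exactly at EOF, the i_size-based size would silently differ: */
	i_size = 8300;
	printf("with i_size = 8300: i_size-based size becomes %llu\n",
	       (unsigned long long)(i_size - iomap.offset));

	return 0;
}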