On Mon, Nov 08, 2021 at 04:05:42AM +0000, Matthew Wilcox (Oracle) wrote:
> The zero iterator can work in folio-sized chunks instead of page-sized
> chunks.  This will save a lot of page cache lookups if the file is
> cached in multi-page folios.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>

hch's dax decoupling series notwithstanding,

TBH I am kinda wondering how the two of you plan to resolve those kinds
of differences -- I haven't looked at that series, though I think this
one has been waiting in the wings for longer?  Heck, I wonder how
Matthew plans to merge all this given that it touches mm, fs, block,
and iomap...?

Reviewed-by: Darrick J. Wong <djwong@xxxxxxxxxx>

--D

> ---
>  fs/iomap/buffered-io.c | 13 ++++++++-----
>  1 file changed, 8 insertions(+), 5 deletions(-)
> 
> diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
> index 64e54981b651..9c61d12028ca 100644
> --- a/fs/iomap/buffered-io.c
> +++ b/fs/iomap/buffered-io.c
> @@ -881,17 +881,20 @@ EXPORT_SYMBOL_GPL(iomap_file_unshare);
>  
>  static s64 __iomap_zero_iter(struct iomap_iter *iter, loff_t pos, u64 length)
>  {
> +	struct folio *folio;
>  	struct page *page;
>  	int status;
> -	unsigned offset = offset_in_page(pos);
> -	unsigned bytes = min_t(u64, PAGE_SIZE - offset, length);
>  
> -	status = iomap_write_begin(iter, pos, bytes, &page);
> +	size_t offset, bytes;
> 
> +	status = iomap_write_begin(iter, pos, length, &page);
>  	if (status)
>  		return status;
> +	folio = page_folio(page);
>  
> -	zero_user(page, offset, bytes);
> -	mark_page_accessed(page);
> +	offset = offset_in_folio(folio, pos);
> +	bytes = min_t(u64, folio_size(folio) - offset, length);
> +	folio_zero_range(folio, offset, bytes);
> +	folio_mark_accessed(folio);
>  
>  	return iomap_write_end(iter, pos, bytes, bytes, page);
>  }
> -- 
> 2.33.0
> 
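
(Addendum for anyone reading along: below is __iomap_zero_iter as it
would look with the quoted hunk applied.  It is reconstructed purely
from the diff above, with a couple of explanatory comments added, so
treat it as a convenience view rather than the authoritative tree.)

static s64 __iomap_zero_iter(struct iomap_iter *iter, loff_t pos, u64 length)
{
	struct folio *folio;
	struct page *page;
	int status;
	size_t offset, bytes;

	/* Get the page cache contents backing this range of the file. */
	status = iomap_write_begin(iter, pos, length, &page);
	if (status)
		return status;
	folio = page_folio(page);

	/* Zero up to one whole folio per iteration instead of one page. */
	offset = offset_in_folio(folio, pos);
	bytes = min_t(u64, folio_size(folio) - offset, length);
	folio_zero_range(folio, offset, bytes);
	folio_mark_accessed(folio);

	return iomap_write_end(iter, pos, bytes, bytes, page);
}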