On Wed, Mar 01, 2023 at 09:26:27AM -0800, Linus Torvalds wrote:
> On Wed, Mar 1, 2023 at 9:10 AM Matthew Wilcox (Oracle)
> <willy@xxxxxxxxxxxxx> wrote:
> >
> > +	memcpy_to_file_folio(folio, 0, vip->vii_immed.vi_immed, isize);
>
> Well, this function doesn't exist upstream yet, much less in any
> stable kernels..

Oops, see other email.

> And while I'm on this subject: the "memcpy_from_file_folio()"
> interface is horrendously broken. For the highmem case, it shouldn't
> be an inline function, and it should loop over pages - instead of
> leaving the callers having to do that.
>
> Of course, callers don't actually do that (since there are no callers
> - unless I'm again missing it due to some macro games with token
> concatenation), but I really wish it was fixed before any callers
> appear.

The first caller was pagecache_read() in fs/ext4/verity.c.  I think
that's waiting for the next merge window.  It does make sense in that
context -- the caller genuinely wants to loop over multiple folios
because its read may cross folio boundaries.  It actually reduces how
much calculation the caller does:

	while (count) {
-		size_t n = min_t(size_t, count,
-				 PAGE_SIZE - offset_in_page(pos));
-		struct page *page;
+		struct folio *folio;
+		size_t n;

-		page = read_mapping_page(inode->i_mapping, pos >> PAGE_SHIFT,
+		folio = read_mapping_folio(inode->i_mapping, pos >> PAGE_SHIFT,
					 NULL);
-		if (IS_ERR(page))
-			return PTR_ERR(page);
-
-		memcpy_from_page(buf, page, offset_in_page(pos), n);
+		if (IS_ERR(folio))
+			return PTR_ERR(folio);

-		put_page(page);
+		n = memcpy_from_file_folio(buf, folio, pos, count);
+		folio_put(folio);

		buf += n;
		pos += n;

Are there other callers which only handle a single folio and really
wish that memcpy_from_file_folio() handled the loop internally?
Probably.  I don't have a good survey yet.