On Tue, Nov 7, 2023 at 2:39 AM Matthew Wilcox (Oracle) wrote:
>
> Using the new folio APIs saves seven hidden calls to compound_head().
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
> ---
>  fs/nilfs2/file.c | 28 +++++++++++++++-------------
>  1 file changed, 15 insertions(+), 13 deletions(-)

I'm still in the middle of reviewing this series, but I have a question
about a related part outside of this patch, so let me ask it here.

In block_page_mkwrite(), which nilfs_page_mkwrite() calls,
__block_write_begin_int() is invoked with a range based on
folio_size(), as shown below:

	end = folio_size(folio);
	/* folio is wholly or partially inside EOF */
	if (folio_pos(folio) + end > size)
		end = size - folio_pos(folio);

	ret = __block_write_begin_int(folio, 0, end, get_block, NULL);
	...

On the other hand, __block_write_begin_int() takes a folio as an
argument but uses a PAGE_SIZE-based remainder calculation and
PAGE_SIZE-based BUG_ON checks:

int __block_write_begin_int(struct folio *folio, loff_t pos, unsigned len,
		get_block_t *get_block, const struct iomap *iomap)
{
	unsigned from = pos & (PAGE_SIZE - 1);
	unsigned to = from + len;
	...
	BUG_ON(from > PAGE_SIZE);
	BUG_ON(to > PAGE_SIZE);
	...

So it looks like this function triggers a kernel BUG if it is called
from block_page_mkwrite() on a folio whose folio_size() exceeds
PAGE_SIZE.

Is this constraint intentional, or is it temporary during the folio
conversions?
Regards,
Ryusuke Konishi

> diff --git a/fs/nilfs2/file.c b/fs/nilfs2/file.c
> index 740ce26d1e76..bec33b89a075 100644
> --- a/fs/nilfs2/file.c
> +++ b/fs/nilfs2/file.c
> @@ -45,34 +45,36 @@ int nilfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
>  static vm_fault_t nilfs_page_mkwrite(struct vm_fault *vmf)
>  {
>  	struct vm_area_struct *vma = vmf->vma;
> -	struct page *page = vmf->page;
> +	struct folio *folio = page_folio(vmf->page);
>  	struct inode *inode = file_inode(vma->vm_file);
>  	struct nilfs_transaction_info ti;
> +	struct buffer_head *bh, *head;
>  	int ret = 0;
> 
>  	if (unlikely(nilfs_near_disk_full(inode->i_sb->s_fs_info)))
>  		return VM_FAULT_SIGBUS; /* -ENOSPC */
> 
>  	sb_start_pagefault(inode->i_sb);
> -	lock_page(page);
> -	if (page->mapping != inode->i_mapping ||
> -	    page_offset(page) >= i_size_read(inode) || !PageUptodate(page)) {
> -		unlock_page(page);
> +	folio_lock(folio);
> +	if (folio->mapping != inode->i_mapping ||
> +	    folio_pos(folio) >= i_size_read(inode) ||
> +	    !folio_test_uptodate(folio)) {
> +		folio_unlock(folio);
>  		ret = -EFAULT;	/* make the VM retry the fault */
>  		goto out;
>  	}
> 
>  	/*
> -	 * check to see if the page is mapped already (no holes)
> +	 * check to see if the folio is mapped already (no holes)
>  	 */
> -	if (PageMappedToDisk(page))
> +	if (folio_test_mappedtodisk(folio))
>  		goto mapped;
> 
> -	if (page_has_buffers(page)) {
> -		struct buffer_head *bh, *head;
> +	head = folio_buffers(folio);
> +	if (head) {
>  		int fully_mapped = 1;
> 
> -		bh = head = page_buffers(page);
> +		bh = head;
>  		do {
>  			if (!buffer_mapped(bh)) {
>  				fully_mapped = 0;
> @@ -81,11 +83,11 @@ static vm_fault_t nilfs_page_mkwrite(struct vm_fault *vmf)
>  		} while (bh = bh->b_this_page, bh != head);
> 
>  		if (fully_mapped) {
> -			SetPageMappedToDisk(page);
> +			folio_set_mappedtodisk(folio);
>  			goto mapped;
>  		}
>  	}
> -	unlock_page(page);
> +	folio_unlock(folio);
> 
>  	/*
>  	 * fill hole blocks
> @@ -105,7 +107,7 @@ static vm_fault_t nilfs_page_mkwrite(struct vm_fault *vmf)
>  	nilfs_transaction_commit(inode->i_sb);
> 
>  mapped:
> -	wait_for_stable_page(page);
> +	folio_wait_stable(folio);
>  out:
>  	sb_end_pagefault(inode->i_sb);
>  	return vmf_fs_error(ret);
> -- 
> 2.42.0
>