On Mon, 2023-08-28 at 09:19 +0800, Xiubo Li wrote:
> On 8/26/23 11:00, Matthew Wilcox wrote:
> > On Fri, Aug 25, 2023 at 09:12:19PM +0100, Matthew Wilcox (Oracle) wrote:
> > > +++ b/fs/ceph/addr.c
> > > @@ -1608,29 +1608,30 @@ static vm_fault_t ceph_filemap_fault(struct vm_fault *vmf)
> > >  		ret = VM_FAULT_SIGBUS;
> > >  	} else {
> > >  		struct address_space *mapping = inode->i_mapping;
> > > -		struct page *page;
> > > +		struct folio *folio;
> > >  
> > >  		filemap_invalidate_lock_shared(mapping);
> > > -		page = find_or_create_page(mapping, 0,
> > > +		folio = __filemap_get_folio(mapping, 0,
> > > +				FGP_LOCK|FGP_ACCESSED|FGP_CREAT,
> > >  				mapping_gfp_constraint(mapping, ~__GFP_FS));
> > > -		if (!page) {
> > 
> > This needs to be "if (IS_ERR(folio))".  Meant to fix that but forgot.
> 
> Hi Matthew,
> 
> Next time please rebase to the latest upstream ceph-client 'testing'
> branch. We need to test this series using the qa teuthology runs,
> which are based on the 'testing' branch.
> 

People working on wide-scale changes to the kernel really shouldn't have
to go hunting down random branches to base their changes on. That's the
purpose of linux-next.

This is an ongoing problem with ceph maintenance -- patches sit in the
"testing" branch, which doesn't get pulled into linux-next. Anyone who
wants to work on patches vs. linux-next that touch ceph runs the risk of
developing against outdated code.

The rationale for this (at least at one time) was a fear of breaking
linux-next, but that is its purpose. If there are problems, we want to
know early! As long as you don't introduce build breaks, anything you
shovel into next is unlikely to be problematic.

There aren't that many people doing ceph testing with linux-next, so the
risk of breaking things is pretty low, at least with patches that only
touch ceph code. You do need to be a bit more careful with patches that
touch common code, but those are pretty rare in the ceph tree.

Please change this!
-- 
Jeff Layton <jlayton@xxxxxxxxxx>
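
[Editor's note: for readers following the IS_ERR() point above, here is a
rough sketch of how that branch looks once the NULL check is swapped for
the ERR_PTR check Matthew mentions. __filemap_get_folio() returns an
ERR_PTR() rather than NULL on failure. This is not the actual ceph patch;
the function name, the VM_FAULT_OOM mapping, and the cleanup path are
illustrative assumptions only.]

#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/err.h>

/* Hypothetical helper showing only the allocation/error-check branch. */
static vm_fault_t demo_fault_branch(struct address_space *mapping)
{
	struct folio *folio;

	filemap_invalidate_lock_shared(mapping);
	folio = __filemap_get_folio(mapping, 0,
				    FGP_LOCK | FGP_ACCESSED | FGP_CREAT,
				    mapping_gfp_constraint(mapping, ~__GFP_FS));
	if (IS_ERR(folio)) {		/* was: if (!folio) */
		filemap_invalidate_unlock_shared(mapping);
		return VM_FAULT_OOM;	/* assumption: how failure maps to a fault code */
	}

	/* ... fill the folio as the real handler does ... */

	folio_unlock(folio);
	folio_put(folio);
	filemap_invalidate_unlock_shared(mapping);
	return 0;			/* placeholder; not the real return path */
}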