On Thu, Jan 14, 2016 at 08:15:57PM +0100, Tomeu Vizoso wrote:
> On 14 January 2016 at 18:13, Al Viro <viro@xxxxxxxxxxxxxxxxxx> wrote:
> > On Thu, Jan 14, 2016 at 05:57:42PM +0100, Tomeu Vizoso wrote:
> >> Here it is:
> >>
> >> [  170.715356] inode: ec8c30b0, pages: 1
> >> [  170.719014] page_address: (null)
> >>
> >> https://lava.collabora.co.uk/scheduler/job/127698/log_file
> >
> > Lovely...  And that looks like the first time that inode hits
> > nfs_get_link().  Ho-hum...
> >
> > Could you add WARN_ON(inode->i_mapping->nrpages) in inode_nohighmem()
> > and see if that triggers?  It really shouldn't (we hit it after iget5_locked()
>
> Indeed :(
>
> https://lava.collabora.co.uk/scheduler/job/127782/log_file

OK...  Unless I'm misreading that, we have
	* inode->i_data.flags set to GFP_USER, with no pages present in there.
	* at some later point nfs_get_link() is called on that inode (for the
	  first time) and sees a page with logical offset 0 already present
	  in there, that page being a highmem one.

That would certainly suffice for things to blow up...  Let's try this: in the
beginning of __add_to_page_cache_locked() add

	VM_BUG_ON_PAGE(PageHighMem(page) &&
		       !(mapping->flags & __GFP_HIGHMEM), page);

and see if that triggers.

<pokes around>

Arrrgh.  Try this:

diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
index ce5a218..8a05309 100644
--- a/fs/nfs/dir.c
+++ b/fs/nfs/dir.c
@@ -1894,15 +1894,14 @@ int nfs_symlink(struct inode *dir, struct dentry *dentry, const char *symname)
 	attr.ia_mode = S_IFLNK | S_IRWXUGO;
 	attr.ia_valid = ATTR_MODE;
 
-	page = alloc_page(GFP_HIGHUSER);
+	page = alloc_page(GFP_USER);
 	if (!page)
 		return -ENOMEM;
 
-	kaddr = kmap_atomic(page);
+	kaddr = page_address(page);
 	memcpy(kaddr, symname, pathlen);
 	if (pathlen < PAGE_SIZE)
 		memset(kaddr + pathlen, 0, PAGE_SIZE - pathlen);
-	kunmap_atomic(kaddr);
 
 	trace_nfs_symlink_enter(dir, dentry);
 	error = NFS_PROTO(dir)->symlink(dir, dentry, page, pathlen, &attr);
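
The reason the patch above can drop the kmap_atomic()/kunmap_atomic() pair: a
GFP_USER page is allocated from lowmem and is always covered by the kernel's
direct mapping, so page_address() returns a usable pointer, whereas a
GFP_HIGHUSER page may live in highmem, where page_address() is NULL unless the
page is currently kmapped (which matches the "page_address: (null)" line in
the log quoted above).  A minimal sketch of the difference, illustration only
and not part of the patch (the function name is made up):

	#include <linux/gfp.h>
	#include <linux/highmem.h>
	#include <linux/mm.h>
	#include <linux/string.h>

	/* Illustration only: lowmem vs highmem pages and page_address(). */
	static void lowmem_vs_highmem_sketch(void)
	{
		struct page *page;
		void *kaddr;

		/* GFP_USER: a lowmem page, always in the direct mapping,
		 * so page_address() can be used without any kmap. */
		page = alloc_page(GFP_USER);
		if (page) {
			kaddr = page_address(page);
			memset(kaddr, 0, PAGE_SIZE);
			__free_page(page);
		}

		/* GFP_HIGHUSER: the page may come from highmem;
		 * page_address() would be NULL for such a page unless it
		 * is mapped, so a kmap_atomic()/kunmap_atomic() pair is
		 * needed before touching it. */
		page = alloc_page(GFP_HIGHUSER);
		if (page) {
			kaddr = kmap_atomic(page);
			memset(kaddr, 0, PAGE_SIZE);
			kunmap_atomic(kaddr);
			__free_page(page);
		}
	}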
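
As for the VM_BUG_ON_PAGE() check suggested earlier, in a 4.4-era tree it
would sit next to the existing asserts at the top of
__add_to_page_cache_locked() in mm/filemap.c.  A rough, untested sketch; the
surrounding context lines may differ in your tree, and mapping_gfp_mask() is
used here rather than reading mapping->flags directly:

--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ ... @@ static int __add_to_page_cache_locked(struct page *page,
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 	VM_BUG_ON_PAGE(PageSwapBacked(page), page);
+	/* a highmem page should never be inserted into a mapping whose
+	 * gfp mask forbids highmem, e.g. after inode_nohighmem() */
+	VM_BUG_ON_PAGE(PageHighMem(page) &&
+		       !(mapping_gfp_mask(mapping) & __GFP_HIGHMEM), page);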