On Fri, Jul 8, 2022 at 1:29 AM Mike Rapoport <rppt@xxxxxxxxxx> wrote:
>
> On Thu, Jul 07, 2022 at 03:09:32PM -0700, Yang Shi wrote:
> > On Thu, Jul 7, 2022 at 1:55 PM Darrick J. Wong <djwong@xxxxxxxxxx> wrote:
> > >
> > > On Thu, Jul 07, 2022 at 10:48:00AM -0700, Yang Shi wrote:
> > > > On Thu, Jul 7, 2022 at 9:57 AM Mike Rapoport <rppt@xxxxxxxxxx> wrote:
> > > > >
> > > > > Eric Biggers suggested that this happens when
> > > > > secretmem_setattr()->simple_setattr() races with secretmem_fault() so
> > > > > that a page that is faulted in by secretmem_fault() (and thus removed
> > > > > from the direct map) is zeroed by inode truncation right afterwards.
> > > > >
> > > > > Since do_truncate() takes inode_lock(), adding inode_lock_shared() to
> > > > > secretmem_fault() prevents the race.
> > > >
> > > > Should invalidate_lock be used to serialize between page fault and truncate?
> > >
> > > I would have thought so, given Documentation/filesystems/locking.rst:
> > >
> > > "->fault() is called when a previously not present pte is about to be
> > > faulted in. The filesystem must find and return the page associated with
> > > the passed in "pgoff" in the vm_fault structure. If it is possible that
> > > the page may be truncated and/or invalidated, then the filesystem must
> > > lock invalidate_lock, then ensure the page is not already truncated
> > > (invalidate_lock will block subsequent truncate), and then return with
> > > VM_FAULT_LOCKED, and the page locked. The VM will unlock the page."
> > >
> > > IIRC page faults aren't supposed to take i_rwsem because the fault could
> > > be in response to someone mmaping a file into memory and then write()ing
> > > to the same file using the mmapped region. The write() takes
> > > inode_lock and faults on the buffer, so the fault cannot take inode_lock
> > > again.
> >
> > Do you mean writing from one part of the file to the other part of the
> > file so the "from" buffer used by copy_from_user() is part of the
> > mmaped region?
> >
> > Another possible deadlock issue by using inode_lock in page faults is
> > mmap_lock is acquired before inode_lock, but write may acquire
> > inode_lock before mmap_lock, it is an AB-BA lock pattern, but it should
> > not cause real deadlock since mmap_lock is not exclusive for page
> > faults. But such pattern should be avoided IMHO.
> >
> > > That said... I don't think memfd_secret files /can/ be written to?
>
> memfd_secret files cannot be written to, they can only be mmap()ed.
> Synchronization is only required between
> do_truncate()->...->simple_setattr() and secretmem->fault() and I don't see
> how that can deadlock.

Sure, there is no deadlock.

>
> I'm not an fs expert though, so if you think that invalidate_lock() is
> safer, I don't mind s/inode_lock/invalidate_lock/ in the patch.

IIUC invalidate_lock should be preferred per the filesystem's locking
document. And I found Jan Kara's email for the invalidate_lock patchset,
please refer to
https://lore.kernel.org/linux-mm/20210715133202.5975-1-jack@xxxxxxx/.

>
> > > Hard to say, since I can't find a manpage describing what that syscall
> > > does.
> > Right, I don't see it's published :-/
>
> There is a groff version:
> https://git.kernel.org/pub/scm/docs/man-pages/man-pages.git/tree/man2/memfd_secret.2
>
> > > --D
> > > >
> > > > >
> > > > > Reported-by: syzbot+9bd2b7adbd34b30b87e4@xxxxxxxxxxxxxxxxxxxxxxxxx
> > > > > Suggested-by: Eric Biggers <ebiggers@xxxxxxxxxx>
> > > > > Signed-off-by: Mike Rapoport <rppt@xxxxxxxxxxxxx>
> > > > > ---
> > > > >
> > > > > v2: use inode_lock_shared() rather than add a new rw_sem to secretmem
> > > > >
> > > > > Axel, I didn't add your Reviewed-by because v2 is quite different.
> > > > >
> > > > >  mm/secretmem.c | 21 ++++++++++++++++-----
> > > > >  1 file changed, 16 insertions(+), 5 deletions(-)
> > > > >
> > > > > diff --git a/mm/secretmem.c b/mm/secretmem.c
> > > > > index 206ed6b40c1d..a4fabf705e4f 100644
> > > > > --- a/mm/secretmem.c
> > > > > +++ b/mm/secretmem.c
> > > > > @@ -55,22 +55,28 @@ static vm_fault_t secretmem_fault(struct vm_fault *vmf)
> > > > >  	gfp_t gfp = vmf->gfp_mask;
> > > > >  	unsigned long addr;
> > > > >  	struct page *page;
> > > > > +	vm_fault_t ret;
> > > > >  	int err;
> > > > >
> > > > >  	if (((loff_t)vmf->pgoff << PAGE_SHIFT) >= i_size_read(inode))
> > > > >  		return vmf_error(-EINVAL);
> > > > >
> > > > > +	inode_lock_shared(inode);
> > > > > +
> > > > >  retry:
> > > > >  	page = find_lock_page(mapping, offset);
> > > > >  	if (!page) {
> > > > >  		page = alloc_page(gfp | __GFP_ZERO);
> > > > > -		if (!page)
> > > > > -			return VM_FAULT_OOM;
> > > > > +		if (!page) {
> > > > > +			ret = VM_FAULT_OOM;
> > > > > +			goto out;
> > > > > +		}
> > > > >
> > > > >  		err = set_direct_map_invalid_noflush(page);
> > > > >  		if (err) {
> > > > >  			put_page(page);
> > > > > -			return vmf_error(err);
> > > > > +			ret = vmf_error(err);
> > > > > +			goto out;
> > > > >  		}
> > > > >
> > > > >  		__SetPageUptodate(page);
> > > > > @@ -86,7 +92,8 @@ static vm_fault_t secretmem_fault(struct vm_fault *vmf)
> > > > >  		if (err == -EEXIST)
> > > > >  			goto retry;
> > > > >
> > > > > -		return vmf_error(err);
> > > > > +		ret = vmf_error(err);
> > > > > +		goto out;
> > > > >  	}
> > > > >
> > > > >  	addr = (unsigned long)page_address(page);
> > > > > @@ -94,7 +101,11 @@ static vm_fault_t secretmem_fault(struct vm_fault *vmf)
> > > > >  	}
> > > > >
> > > > >  	vmf->page = page;
> > > > > -	return VM_FAULT_LOCKED;
> > > > > +	ret = VM_FAULT_LOCKED;
> > > > > +
> > > > > +out:
> > > > > +	inode_unlock_shared(inode);
> > > > > +	return ret;
> > > > >  }
> > > > >
> > > > >  static const struct vm_operations_struct secretmem_vm_ops = {
> > > > >
> > > > > base-commit: 03c765b0e3b4cb5063276b086c76f7a612856a9a
> > > > > --
> > > > > 2.34.1
> > > > >
> > > >
>
> --
> Sincerely yours,
> Mike.
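
For illustration, a minimal and untested sketch of the s/inode_lock/invalidate_lock/
variant discussed above. It uses the real filemap_invalidate_lock_shared() /
filemap_invalidate_unlock_shared() helpers from the invalidate_lock series linked
earlier; secretmem_fault_body() is a hypothetical stand-in for the existing fault
logic from the posted patch, not an actual kernel function, and the truncate side
(secretmem_setattr()) would need the corresponding write lock for this to work.

/*
 * Sketch only, not the posted patch: serialize the fault against truncate
 * with mapping->invalidate_lock instead of i_rwsem.
 */
static vm_fault_t secretmem_fault(struct vm_fault *vmf)
{
        struct address_space *mapping = vmf->vma->vm_file->f_mapping;
        struct inode *inode = file_inode(vmf->vma->vm_file);
        vm_fault_t ret;

        if (((loff_t)vmf->pgoff << PAGE_SHIFT) >= i_size_read(inode))
                return vmf_error(-EINVAL);

        /* block a concurrent truncate while the page is faulted in */
        filemap_invalidate_lock_shared(mapping);

        /*
         * secretmem_fault_body() is a hypothetical stand-in for the
         * find_lock_page()/alloc_page()/set_direct_map_invalid_noflush()
         * sequence shown in the patch above.
         */
        ret = secretmem_fault_body(vmf, mapping);

        filemap_invalidate_unlock_shared(mapping);
        return ret;
}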