On Fri, Sep 6, 2024 at 7:12 AM Andrii Nakryiko <andrii@xxxxxxxxxx> wrote:
> Given filp_cachep is already marked SLAB_TYPESAFE_BY_RCU, we can safely
> access vma->vm_file->f_inode field locklessly under just rcu_read_lock()

No, not every file is SLAB_TYPESAFE_BY_RCU - see for example ovl_mmap(),
which uses backing_file_mmap(), which does vma_set_file(vma, file) where
"file" comes from ovl_mmap()'s "realfile", which comes from
file->private_data, which is set in ovl_open() to the return value of
ovl_open_realfile(), which comes from backing_file_open(), which
allocates a file with alloc_empty_backing_file(), which uses a normal
kzalloc() without any RCU stuff, with this comment:

 * This is only for kernel internal use, and the allocate file must not be
 * installed into file tables or such.

And when a backing_file is freed, you can see on the path
__fput() -> file_free() that files with FMODE_BACKING are directly freed
with kfree(), no RCU delay.

So the RCU-ness of "struct file" is an implementation detail of the VFS,
and you can't rely on it for ->vm_file unless you either get the VFS to
change how backing file lifetimes work (which might slow down some other
workload), or you find a way to figure out whether you're dealing with a
backing file without actually accessing the file.

> +static struct uprobe *find_active_uprobe_speculative(unsigned long bp_vaddr)
> +{
> +	const vm_flags_t flags = VM_HUGETLB | VM_MAYEXEC | VM_MAYSHARE;
> +	struct mm_struct *mm = current->mm;
> +	struct uprobe *uprobe;
> +	struct vm_area_struct *vma;
> +	struct file *vm_file;
> +	struct inode *vm_inode;
> +	unsigned long vm_pgoff, vm_start;
> +	int seq;
> +	loff_t offset;
> +
> +	if (!mmap_lock_speculation_start(mm, &seq))
> +		return NULL;
> +
> +	rcu_read_lock();
> +
> +	vma = vma_lookup(mm, bp_vaddr);
> +	if (!vma)
> +		goto bail;
> +
> +	vm_file = data_race(vma->vm_file);

A plain "data_race()" says "I'm fine with this load tearing", but you're
relying on this load not tearing (since you access the vm_file pointer
below). You're also relying on the "struct file" that vma->vm_file
points to being populated at this point, which means you need CONSUME
semantics here, which READ_ONCE() will give you, and something like
RELEASE semantics on any pairing store that populates vma->vm_file,
which means they'd all have to become something like smp_store_release().

You might want to instead add another recheck of the sequence count
(which would involve at least a read memory barrier after the preceding
patch is fixed) after loading the ->vm_file pointer, to ensure that no
one was concurrently changing the ->vm_file pointer before you do memory
accesses through it.

> +	if (!vm_file || (vma->vm_flags & flags) != VM_MAYEXEC)
> +		goto bail;

There's a missing data_race() annotation on the vma->vm_flags access.

> +	vm_inode = data_race(vm_file->f_inode);

As noted above, this doesn't work because you can't rely on having RCU
lifetime for the file.

One *very* ugly hack you could do, if you think this code is so
performance-sensitive that you're willing to do fairly atrocious things
here, would be to use a "yes, I am intentionally doing a UAF read and I
know the address might not even be mapped at this point, it's fine,
trust me" pattern: use copy_from_kernel_nofault(), kind of like in
prepend_copy() in fs/d_path.c, and then immediately recheck the sequence
count before doing *anything* with the vm_inode pointer you just loaded.
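
To make that concrete, a rough sketch of the pattern I mean (untested;
this reuses the locals from your function above, and it assumes
mmap_lock_speculation_end() from the previous patch can also be called
as an intermediate revalidation step rather than only as the final
check):

	/*
	 * Deliberate possibly-UAF read: vm_file may already have been
	 * kfree()d if it was a FMODE_BACKING file, so don't dereference
	 * it directly; tolerate the read faulting instead.
	 */
	if (copy_from_kernel_nofault(&vm_inode, &vm_file->f_inode,
				     sizeof(vm_inode)))
		goto bail;

	/*
	 * Revalidate before vm_inode is used for anything at all; if
	 * the mm changed under us, the value we just read may be
	 * garbage.
	 */
	if (!mmap_lock_speculation_end(mm, seq))
		goto bail;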
> +	vm_pgoff = data_race(vma->vm_pgoff);
> +	vm_start = data_race(vma->vm_start);
> +
> +	offset = (loff_t)(vm_pgoff << PAGE_SHIFT) + (bp_vaddr - vm_start);
> +	uprobe = find_uprobe_rcu(vm_inode, offset);
> +	if (!uprobe)
> +		goto bail;
> +
> +	/* now double check that nothing about MM changed */
> +	if (!mmap_lock_speculation_end(mm, seq))
> +		goto bail;
> +
> +	rcu_read_unlock();
> +
> +	/* happy case, we speculated successfully */
> +	return uprobe;
> +bail:
> +	rcu_read_unlock();
> +	return NULL;
> +}
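
Pulling those suggestions together, the speculative path could end up
looking roughly like this (again untested, only a sketch; the
intermediate mmap_lock_speculation_end() call as a mid-flight recheck is
my assumption about how that helper can be used, and whether the final
recheck is still needed on top of it is something you'd know better):

static struct uprobe *find_active_uprobe_speculative(unsigned long bp_vaddr)
{
	const vm_flags_t flags = VM_HUGETLB | VM_MAYEXEC | VM_MAYSHARE;
	struct mm_struct *mm = current->mm;
	struct uprobe *uprobe;
	struct vm_area_struct *vma;
	struct file *vm_file;
	struct inode *vm_inode;
	unsigned long vm_pgoff, vm_start;
	int seq;
	loff_t offset;

	if (!mmap_lock_speculation_start(mm, &seq))
		return NULL;

	rcu_read_lock();

	vma = vma_lookup(mm, bp_vaddr);
	if (!vma)
		goto bail;

	/*
	 * READ_ONCE(), not data_race(): this load must not tear, since
	 * the pointer is read through below.
	 */
	vm_file = READ_ONCE(vma->vm_file);
	if (!vm_file || (data_race(vma->vm_flags) & flags) != VM_MAYEXEC)
		goto bail;

	/*
	 * vm_file may already be freed (FMODE_BACKING files are
	 * kfree()d without an RCU delay), so read f_inode with a
	 * fault-tolerant copy instead of a plain dereference.
	 */
	if (copy_from_kernel_nofault(&vm_inode, &vm_file->f_inode,
				     sizeof(vm_inode)))
		goto bail;

	vm_pgoff = data_race(vma->vm_pgoff);
	vm_start = data_race(vma->vm_start);

	/* revalidate before doing anything with vm_inode */
	if (!mmap_lock_speculation_end(mm, seq))
		goto bail;

	offset = (loff_t)(vm_pgoff << PAGE_SHIFT) + (bp_vaddr - vm_start);
	uprobe = find_uprobe_rcu(vm_inode, offset);
	if (!uprobe)
		goto bail;

	/* now double check that nothing about the MM changed */
	if (!mmap_lock_speculation_end(mm, seq))
		goto bail;

	rcu_read_unlock();

	/* happy case, we speculated successfully */
	return uprobe;
bail:
	rcu_read_unlock();
	return NULL;
}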