On Tue, Jan 09, 2024, Muhammad Usama Anjum wrote:
> Move mmu notification mechanism inside mm lock to prevent race condition
> in other components which depend on it. The notifier will invalidate
> memory range. Depending upon the number of iterations, different memory
> ranges would be invalidated.
>
> The following warning would be removed by this patch:
> WARNING: CPU: 0 PID: 5067 at arch/x86/kvm/../../../virt/kvm/kvm_main.c:734 kvm_mmu_notifier_change_pte+0x860/0x960 arch/x86/kvm/../../../virt/kvm/kvm_main.c:734
>
> There is no behavioural and performance change with this patch when
> there is no component registered with the mmu notifier.
>
> Fixes: 52526ca7fdb9 ("fs/proc/task_mmu: implement IOCTL to get and optionally clear info about PTEs")
> Reported-by: syzbot+81227d2bd69e9dedb802@xxxxxxxxxxxxxxxxxxxxxxxxx
> Link: https://lore.kernel.org/all/000000000000f6d051060c6785bc@xxxxxxxxxx/
> Cc: Sean Christopherson <seanjc@xxxxxxxxxx>
> Cc: stable@xxxxxxxxxxxxxxx
> Signed-off-by: Muhammad Usama Anjum <usama.anjum@xxxxxxxxxxxxx>
> ---
>  fs/proc/task_mmu.c | 22 ++++++++++++----------
>  1 file changed, 12 insertions(+), 10 deletions(-)
>
> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index 62b16f42d5d2..56c2e7357494 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -2448,13 +2448,6 @@ static long do_pagemap_scan(struct mm_struct *mm, unsigned long uarg)
>  	if (ret)
>  		return ret;
>
> -	/* Protection change for the range is going to happen. */
> -	if (p.arg.flags & PM_SCAN_WP_MATCHING) {
> -		mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_VMA, 0,
> -					mm, p.arg.start, p.arg.end);
> -		mmu_notifier_invalidate_range_start(&range);
> -	}
> -
>  	for (walk_start = p.arg.start; walk_start < p.arg.end;
>  	     walk_start = p.arg.walk_end) {
>  		long n_out;

Nit, might be worth moving "struct mmu_notifier_range range;" inside the
loop to guard against stale usage, but that's definitely optional.

Reviewed-by: Sean Christopherson <seanjc@xxxxxxxxxx>
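
For readers following along, a minimal sketch of what the optional nit above
could look like, assuming the notifier setup ends up inside the walk loop in
the next revision. Identifiers are taken from the quoted hunk; the exact
range bounds, lock placement, and invalidate_range_end() pairing in the real
patch may differ:

	/* Illustrative only, not the actual patch. */
	for (walk_start = p.arg.start; walk_start < p.arg.end;
	     walk_start = p.arg.walk_end) {
		/*
		 * Declared per iteration so a stale range from a previous
		 * chunk cannot be reused by mistake.
		 */
		struct mmu_notifier_range range;
		long n_out;

		if (p.arg.flags & PM_SCAN_WP_MATCHING) {
			mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_VMA,
						0, mm, walk_start, p.arg.end);
			mmu_notifier_invalidate_range_start(&range);
		}

		/* ... walk this chunk of the range under the mmap lock ... */

		if (p.arg.flags & PM_SCAN_WP_MATCHING)
			mmu_notifier_invalidate_range_end(&range);
	}

Scoping the range to the loop body means any accidental use of it after the
loop no longer compiles, which is the "stale usage" the nit refers to.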