On Fri, Jul 26, 2024 at 04:52:06PM GMT, Sean Christopherson wrote:
> Mark pages accessed before dropping mmu_lock when faulting in guest memory
> so that RISC-V can convert to kvm_release_faultin_page() without tripping
> its lockdep assertion on mmu_lock being held. Marking pages accessed
> outside of mmu_lock is ok (not great, but safe), but marking pages _dirty_
> outside of mmu_lock can make filesystems unhappy.
> 
> Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>
> ---
>  arch/riscv/kvm/mmu.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/riscv/kvm/mmu.c b/arch/riscv/kvm/mmu.c
> index 06aa5a0d056d..806f68e70642 100644
> --- a/arch/riscv/kvm/mmu.c
> +++ b/arch/riscv/kvm/mmu.c
> @@ -683,10 +683,10 @@ int kvm_riscv_gstage_map(struct kvm_vcpu *vcpu,
>  out_unlock:
>  	if ((!ret || ret == -EEXIST) && writable)
>  		kvm_set_pfn_dirty(hfn);
> +	else
> +		kvm_release_pfn_clean(hfn);
>  
>  	spin_unlock(&kvm->mmu_lock);
> -	kvm_set_pfn_accessed(hfn);
> -	kvm_release_pfn_clean(hfn);
>  	return ret;
>  }
> 
> -- 
> 2.46.0.rc1.232.g9752f9e123-goog
> 

Reviewed-by: Andrew Jones <ajones@xxxxxxxxxxxxxxxx>
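
As a quick reference for the change above, this is how I read the
out_unlock tail once the hunk is applied (a sketch reconstructed from
the diff context alone, so the surrounding indentation is approximated
rather than quoted from the resulting file):

out_unlock:
	/* For writable faults, mark the page dirty while mmu_lock is still held. */
	if ((!ret || ret == -EEXIST) && writable)
		kvm_set_pfn_dirty(hfn);
	else
		kvm_release_pfn_clean(hfn);

	spin_unlock(&kvm->mmu_lock);
	return ret;
}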