On 9/11/23 05:16, David Stevens wrote:
> --- a/arch/x86/kvm/mmu/paging_tmpl.h
> +++ b/arch/x86/kvm/mmu/paging_tmpl.h
> @@ -848,7 +848,8 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
>
>  out_unlock:
>  	write_unlock(&vcpu->kvm->mmu_lock);
> -	kvm_release_pfn_clean(fault->pfn);
> +	if (fault->is_refcounted_page)
> +		kvm_set_page_accessed(pfn_to_page(fault->pfn));

The other similar occurrences in the code that replaced
kvm_release_pfn_clean() with kvm_set_page_accessed() did it under the
held mmu_lock. Does kvm_set_page_accessed() need to be invoked under
the lock?

-- 
Best regards,
Dmitry
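If it does, I'd expect the hunk to look more like the sketch below, with
the call made before the lock is dropped, matching the other call sites
(untested, just to illustrate the question):

 out_unlock:
+	if (fault->is_refcounted_page)
+		kvm_set_page_accessed(pfn_to_page(fault->pfn));
 	write_unlock(&vcpu->kvm->mmu_lock);
-	kvm_release_pfn_clean(fault->pfn);
-	if (fault->is_refcounted_page)
-		kvm_set_page_accessed(pfn_to_page(fault->pfn));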