On Thu, 2022-02-17 at 16:03 -0500, Paolo Bonzini wrote:
> For cleanliness, do not leave a stale GVA in the cache after all the roots are
> cleared.  In practice, kvm_mmu_load will go through kvm_mmu_sync_roots if
> paging is on, and will not use vcpu_match_mmio_gva at all if paging is off.
> However, leaving data in the cache might cause bugs in the future.
> 
> Signed-off-by: Paolo Bonzini <pbonzini@xxxxxxxxxx>
> ---
>  arch/x86/kvm/mmu/mmu.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index b01160716c6a..4e8e3e9530ca 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -5111,6 +5111,7 @@ void kvm_mmu_unload(struct kvm_vcpu *vcpu)
>  {
>  	__kvm_mmu_unload(vcpu->kvm, &vcpu->arch.root_mmu);
>  	__kvm_mmu_unload(vcpu->kvm, &vcpu->arch.guest_mmu);
> +	vcpu_clear_mmio_info(vcpu, MMIO_GVA_ANY);
>  }
> 
>  static bool need_remote_flush(u64 old, u64 new)

One thing that has bothered me for a while with all of this is that
vcpu->arch.{mmio_gva|mmio_access|mmio_gfn|mmio_gen} are often called the
MMIO cache, while we also install reserved-bit SPTEs and call those the
MMIO cache as well. The above is basically a cache of a cache, sort of.

Reviewed-by: Maxim Levitsky <mlevitsk@xxxxxxxxxx>

Best regards,
	Maxim Levitsky
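
P.S. For anyone skimming: the cache being cleared here is the small per-vCPU
GVA->GPA lookup, not the reserved-bit MMIO SPTEs, which are torn down with the
roots. Roughly (a simplified sketch from memory, not quoted from the tree, so
the exact shape may differ):

	/* per-vCPU MMIO GVA cache helper, approximately as in arch/x86/kvm/x86.h */
	static inline void vcpu_clear_mmio_info(struct kvm_vcpu *vcpu, gva_t gva)
	{
		/* MMIO_GVA_ANY wipes the cached translation unconditionally;
		 * otherwise only a matching page is dropped.
		 */
		if (gva != MMIO_GVA_ANY && vcpu->arch.mmio_gva != (gva & PAGE_MASK))
			return;

		/* a zero GVA makes vcpu_match_mmio_gva always miss */
		vcpu->arch.mmio_gva = 0;
	}

So the patch only invalidates the GVA-side lookup; the SPTE-side "MMIO cache"
is handled by the normal root teardown.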