On Tue, Mar 08, 2022, Paolo Bonzini wrote:
> On 3/8/22 17:32, Sean Christopherson wrote:
> >
> > Aha!  An idea that would provide line of sight to avoiding retpoline in all cases
> > once we use static_call() for nested_ops, which I really want to do...  Drop the
> > mmu hook entirely and replace it with:
> >
> > 	static inline kvm_mmu_get_guest_pgd(struct kvm_vcpu *vcpu)
> > 	{
> > 		if (!mmu_is_nested(vcpu))
> > 			return kvm_read_cr3(vcpu);
> > 		else
> > 			return kvm_x86_ops.nested_ops->get_guest_pgd(vcpu);
> > 	}
>
> Makes sense, but I think you mean
>
> static inline unsigned long kvm_mmu_get_guest_pgd(struct kvm_vcpu *vcpu,
> 						  struct kvm_mmu *mmu)
> {
> 	if (unlikely(vcpu == &vcpu->arch.guest_mmu))

Well, not that certainly :-)

	if (mmu == &vcpu->arch.guest_mmu)

But you're right, we need to be able to do kvm_read_cr3() for the actual
nested_mmu.

> 		return kvm_x86_ops.nested_ops->get_guest_pgd(vcpu);
> 	else
> 		return kvm_read_cr3(vcpu);
> }
>
> ?