On Tue, Mar 08, 2022, Paolo Bonzini wrote:
> On 3/8/22 17:16, Sean Christopherson wrote:
> >
> > > +static inline unsigned long kvm_mmu_get_guest_pgd(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu)
> >
> > Wrap the params, no reason to make this line so long.
> >
> > > +{
> > > +#ifdef CONFIG_RETPOLINE
> > > +	if (mmu->get_guest_pgd == kvm_get_guest_cr3)
> > > +		return kvm_read_cr3(vcpu);
> >
> > This is unnecessarily fragile and confusing at first glance.  Compilers are
> > smart enough to generate a non-inline version of functions if they're used
> > for function pointers, while still inlining where appropriate.  In other
> > words, just drop kvm_get_guest_cr3() entirely, a la get_pdptr => kvm_pdptr_read().
>
> Unfortunately this isn't entirely true.  The function pointer will not match
> between compilation units, in this case between the one that calls
> kvm_mmu_get_guest_pgd and the one that assigned kvm_read_cr3 to the function
> pointer.

Ooh, that's a nasty gotcha.  And that's why your v1 used a NULL entry as a
sentinel for rerouting to kvm_read_cr3().

Hrm, I'm torn between disliking the NULL behavior and disliking the subtle
redirect :-)

Aha!  An idea that would provide line of sight to avoiding retpoline in all
cases once we use static_call() for nested_ops, which I really want to do...

Drop the mmu hook entirely and replace it with:

  static inline unsigned long kvm_mmu_get_guest_pgd(struct kvm_vcpu *vcpu)
  {
	if (!mmu_is_nested(vcpu))
		return kvm_read_cr3(vcpu);
	else
		return kvm_x86_ops.nested_ops->get_guest_pgd(vcpu);
  }
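
As an aside, here is a minimal userspace sketch of the compilation-unit gotcha
Paolo describes above, i.e. why comparing against the address of an inline
helper is not reliable.  The file and symbol names are made up purely for
illustration and have nothing to do with the actual KVM code:

  /* reg.h: a static inline helper, like kvm_read_cr3() lives in a header. */
  #ifndef REG_H
  #define REG_H

  static inline unsigned long read_reg(void)
  {
	return 42;
  }

  extern unsigned long (*reg_hook)(void);

  #endif

  /* setup.c: assigning the hook forces *this* translation unit to emit its
   * own out-of-line copy of read_reg() and store that copy's address. */
  #include "reg.h"

  unsigned long (*reg_hook)(void) = read_reg;

  /* caller.c: the comparison uses the address of *this* translation unit's
   * copy of read_reg(), which need not match what setup.c stored. */
  #include "reg.h"
  #include <stdio.h>

  int main(void)
  {
	printf("match: %d\n", reg_hook == read_reg);	/* may print 0 */
	return 0;
  }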