On Thu, Apr 01, 2021, Maxim Levitsky wrote:
> if new KVM_*_SREGS2 ioctls are used, the PDPTRs are
> part of the migration state and thus are loaded
> by those ioctls.
>
> Signed-off-by: Maxim Levitsky <mlevitsk@xxxxxxxxxx>
> ---
>  arch/x86/kvm/svm/nested.c | 15 +++++++++++++--
>  1 file changed, 13 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
> index ac5e3e17bda4..b94916548cfa 100644
> --- a/arch/x86/kvm/svm/nested.c
> +++ b/arch/x86/kvm/svm/nested.c
> @@ -373,10 +373,9 @@ static int nested_svm_load_cr3(struct kvm_vcpu *vcpu, unsigned long cr3,
>  		return -EINVAL;
>
>  	if (!nested_npt && is_pae_paging(vcpu) &&
> -	    (cr3 != kvm_read_cr3(vcpu) || pdptrs_changed(vcpu))) {
> +	    (cr3 != kvm_read_cr3(vcpu) || !kvm_register_is_available(vcpu, VCPU_EXREG_PDPTR)))
>  		if (CC(!load_pdptrs(vcpu, vcpu->arch.walk_mmu, cr3)))

What if we ditch the optimizations[*] altogether and just do:

	if (!nested_npt && is_pae_paging(vcpu) &&
	    CC(!load_pdptrs(vcpu, vcpu->arch.walk_mmu, cr3)))
		return -EINVAL;

Won't that obviate the need for KVM_{GET|SET}_SREGS2, since KVM will always load the PDPTRs from memory?  IMO, nested migration with shadow paging doesn't warrant this level of optimization complexity.

[*] For some definitions of "optimization", since the extra pdptrs_changed() check in the existing code is likely a net negative.

>  			return -EINVAL;
> -	}
>
>  	/*
>  	 * TODO: optimize unconditional TLB flush/MMU sync here and in
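For context on what "always load the PDPTRs from memory" entails: under PAE paging, CR3 points at a 32-byte-aligned table of four 8-byte PDPTEs, and a present entry with reserved bits set must fail the load (which is why load_pdptrs() can return failure and nested_svm_load_cr3() then returns -EINVAL). A minimal user-space sketch of those semantics, illustrative only and not kernel code; the reserved-bit mask below is a simplified subset of what the SDM and KVM actually check, and the function and variable names are made up for the example:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define PDPTE_PRESENT   (1ULL << 0)
/* Simplified subset of PAE PDPTE reserved bits (bits 2:1 and 8:5). */
#define PDPTE_RSVD_MASK 0x1E6ULL

/*
 * Read the four PDPTEs at (cr3 & ~0x1F) from simulated guest memory.
 * Returns 1 on success (pdptrs[] filled), 0 if any present entry has
 * reserved bits set, mirroring a failed load_pdptrs().
 */
static int load_pdptrs_from_mem(const uint8_t *guest_mem, uint64_t cr3,
                                uint64_t pdptrs[4])
{
    uint64_t entry;
    int i;

    for (i = 0; i < 4; i++) {
        memcpy(&entry, guest_mem + (cr3 & ~0x1FULL) + i * 8, 8);
        if ((entry & PDPTE_PRESENT) && (entry & PDPTE_RSVD_MASK))
            return 0;   /* invalid PDPTE: caller fails the CR3 load */
        pdptrs[i] = entry;
    }
    return 1;
}
```

Doing this read unconditionally on every nested CR3 load is the "no optimization" variant Sean proposes: correctness no longer depends on any cached-PDPTR state surviving migration.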