Jim, Paolo,
Do you guys want me to take care of this? Let me know.
Thanks
Babu

> -----Original Message-----
> From: gregkh@xxxxxxxxxxxxxxxxxxx <gregkh@xxxxxxxxxxxxxxxxxxx>
> Sent: Monday, May 18, 2020 8:40 AM
> To: Moger, Babu <Babu.Moger@xxxxxxx>; jmattson@xxxxxxxxxx; pbonzini@xxxxxxxxxx
> Cc: stable@xxxxxxxxxxxxxxx
> Subject: FAILED: patch "[PATCH] KVM: x86: Fix pkru save/restore when guest CR4.PKE=0, move it" failed to apply to 5.4-stable tree
>
>
> The patch below does not apply to the 5.4-stable tree.
> If someone wants it applied there, or to any other stable or longterm
> tree, then please email the backport, including the original git commit
> id to <stable@xxxxxxxxxxxxxxx>.
>
> thanks,
>
> greg k-h
>
> ------------------ original commit in Linus's tree ------------------
>
> From 37486135d3a7b03acc7755b63627a130437f066a Mon Sep 17 00:00:00 2001
> From: Babu Moger <babu.moger@xxxxxxx>
> Date: Tue, 12 May 2020 18:59:06 -0500
> Subject: [PATCH] KVM: x86: Fix pkru save/restore when guest CR4.PKE=0, move
>  it to x86.c
>
> Though rdpkru and wrpkru are contingent upon CR4.PKE, the PKRU
> resource isn't. It can be read with XSAVE and written with XRSTOR.
> So, if we don't set the guest PKRU value here (kvm_load_guest_xsave_state),
> the guest can read the host value.
>
> In the case of kvm_load_host_xsave_state, a guest with CR4.PKE clear could
> potentially use XRSTOR to change the host PKRU value.
>
> While at it, move the pkru state save/restore to common code and the
> host_pkru field to kvm_vcpu_arch. This will let SVM support protection keys.
>
> Cc: stable@xxxxxxxxxxxxxxx
> Reported-by: Jim Mattson <jmattson@xxxxxxxxxx>
> Signed-off-by: Babu Moger <babu.moger@xxxxxxx>
> Message-Id: <158932794619.44260.14508381096663848853.stgit@naples-babu.amd.com>
> Signed-off-by: Paolo Bonzini <pbonzini@xxxxxxxxxx>
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 9e8263b1e6fe..0a6b35353fc7 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -578,6 +578,7 @@ struct kvm_vcpu_arch {
>  	unsigned long cr4;
>  	unsigned long cr4_guest_owned_bits;
>  	unsigned long cr8;
> +	u32 host_pkru;
>  	u32 pkru;
>  	u32 hflags;
>  	u64 efer;
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index e45cf89c5821..89c766fad889 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -1372,7 +1372,6 @@ void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>
>  	vmx_vcpu_pi_load(vcpu, cpu);
>
> -	vmx->host_pkru = read_pkru();
>  	vmx->host_debugctlmsr = get_debugctlmsr();
>  }
>
> @@ -6564,11 +6563,6 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
>
>  	kvm_load_guest_xsave_state(vcpu);
>
> -	if (static_cpu_has(X86_FEATURE_PKU) &&
> -	    kvm_read_cr4_bits(vcpu, X86_CR4_PKE) &&
> -	    vcpu->arch.pkru != vmx->host_pkru)
> -		__write_pkru(vcpu->arch.pkru);
> -
>  	pt_guest_enter(vmx);
>
>  	if (vcpu_to_pmu(vcpu)->version)
> @@ -6658,18 +6652,6 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
>
>  	pt_guest_exit(vmx);
>
> -	/*
> -	 * eager fpu is enabled if PKEY is supported and CR4 is switched
> -	 * back on host, so it is safe to read guest PKRU from current
> -	 * XSAVE.
> -	 */
> -	if (static_cpu_has(X86_FEATURE_PKU) &&
> -	    kvm_read_cr4_bits(vcpu, X86_CR4_PKE)) {
> -		vcpu->arch.pkru = rdpkru();
> -		if (vcpu->arch.pkru != vmx->host_pkru)
> -			__write_pkru(vmx->host_pkru);
> -	}
> -
>  	kvm_load_host_xsave_state(vcpu);
>
>  	vmx->nested.nested_run_pending = 0;
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 98176b80c481..d11eba8b85c6 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -837,11 +837,25 @@ void kvm_load_guest_xsave_state(struct kvm_vcpu *vcpu)
>  		    vcpu->arch.ia32_xss != host_xss)
>  			wrmsrl(MSR_IA32_XSS, vcpu->arch.ia32_xss);
>  	}
> +
> +	if (static_cpu_has(X86_FEATURE_PKU) &&
> +	    (kvm_read_cr4_bits(vcpu, X86_CR4_PKE) ||
> +	     (vcpu->arch.xcr0 & XFEATURE_MASK_PKRU)) &&
> +	    vcpu->arch.pkru != vcpu->arch.host_pkru)
> +		__write_pkru(vcpu->arch.pkru);
>  }
>  EXPORT_SYMBOL_GPL(kvm_load_guest_xsave_state);
>
>  void kvm_load_host_xsave_state(struct kvm_vcpu *vcpu)
>  {
> +	if (static_cpu_has(X86_FEATURE_PKU) &&
> +	    (kvm_read_cr4_bits(vcpu, X86_CR4_PKE) ||
> +	     (vcpu->arch.xcr0 & XFEATURE_MASK_PKRU))) {
> +		vcpu->arch.pkru = rdpkru();
> +		if (vcpu->arch.pkru != vcpu->arch.host_pkru)
> +			__write_pkru(vcpu->arch.host_pkru);
> +	}
> +
>  	if (kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE)) {
>
>  		if (vcpu->arch.xcr0 != host_xcr0)
> @@ -3549,6 +3563,9 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>
>  	kvm_x86_ops.vcpu_load(vcpu, cpu);
>
> +	/* Save host pkru register if supported */
> +	vcpu->arch.host_pkru = read_pkru();
> +
>  	/* Apply any externally detected TSC adjustments (due to suspend) */
>  	if (unlikely(vcpu->arch.tsc_offset_adjustment)) {
>  		adjust_tsc_offset_host(vcpu, vcpu->arch.tsc_offset_adjustment);
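
FWIW, the PKRU handling that this patch consolidates into x86.c boils
down to the sketch below (simplified from the two x86.c hunks above;
guest_can_touch_pkru() is an illustrative helper name for this email,
not an actual kernel function):

static bool guest_can_touch_pkru(struct kvm_vcpu *vcpu)
{
	/*
	 * PKRU is reachable via RDPKRU/WRPKRU when CR4.PKE is set, but
	 * also via XSAVE/XRSTOR when XCR0.PKRU is set, so CR4.PKE=0
	 * alone is not a reason to skip the swap.
	 */
	return static_cpu_has(X86_FEATURE_PKU) &&
	       (kvm_read_cr4_bits(vcpu, X86_CR4_PKE) ||
		(vcpu->arch.xcr0 & XFEATURE_MASK_PKRU));
}

/* Before entry: install the guest PKRU if it differs from the host's. */
if (guest_can_touch_pkru(vcpu) &&
    vcpu->arch.pkru != vcpu->arch.host_pkru)
	__write_pkru(vcpu->arch.pkru);

/*
 * After exit: the guest may have changed PKRU (e.g. via XRSTOR), so
 * re-read it before restoring the host value that kvm_arch_vcpu_load()
 * saved in vcpu->arch.host_pkru.
 */
if (guest_can_touch_pkru(vcpu)) {
	vcpu->arch.pkru = rdpkru();
	if (vcpu->arch.pkru != vcpu->arch.host_pkru)
		__write_pkru(vcpu->arch.host_pkru);
}

Since the swap now lives in kvm_load_guest_xsave_state() and
kvm_load_host_xsave_state() rather than in vmx.c, SVM picks it up
without any vendor-specific code.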