On Wed, Jul 07, 2021, Brijesh Singh wrote:
> From: Tom Lendacky <thomas.lendacky@xxxxxxx>
>
> In preparation to support SEV-SNP AP Creation, use a variable that holds
> the VMSA physical address rather than converting the virtual address.
> This will allow SEV-SNP AP Creation to set the new physical address that
> will be used should the vCPU reset path be taken.

I'm pretty sure adding vmsa_pa is unnecessary.  The next patch sets
svm->vmsa_pa and vmcb->control.vmsa_pa as a pair.  And for the existing
code, my proposed patch to emulate INIT on shutdown would eliminate the
one path that zeros the VMCB[1].  That series also drops the init_vmcb()
in svm_create_vcpu()[2].

Assuming there are no VMCB shenanigans I'm missing, sev_es_init_vmcb()
can do

	if (!init_event)
		svm->vmcb->control.vmsa_pa = __pa(svm->vmsa);

And while I'm thinking of it, the next patch should ideally free
svm->vmsa when the guest configures a new VMSA for the vCPU.

[1] https://lkml.kernel.org/r/20210713163324.627647-45-seanjc@xxxxxxxxxx
[2] https://lkml.kernel.org/r/20210713163324.627647-10-seanjc@xxxxxxxxxx

> Signed-off-by: Tom Lendacky <thomas.lendacky@xxxxxxx>
> Signed-off-by: Brijesh Singh <brijesh.singh@xxxxxxx>
> ---
>  arch/x86/kvm/svm/sev.c | 5 ++---
>  arch/x86/kvm/svm/svm.c | 9 ++++++++-
>  arch/x86/kvm/svm/svm.h | 1 +
>  3 files changed, 11 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index 4cb4c1d7e444..d8ad6dd58c87 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -3553,10 +3553,9 @@ void sev_es_init_vmcb(struct vcpu_svm *svm)
>  
>  	/*
>  	 * An SEV-ES guest requires a VMSA area that is a separate from the
> -	 * VMCB page. Do not include the encryption mask on the VMSA physical
> -	 * address since hardware will access it using the guest key.
> +	 * VMCB page.
>  	 */
> -	svm->vmcb->control.vmsa_pa = __pa(svm->vmsa);
> +	svm->vmcb->control.vmsa_pa = svm->vmsa_pa;
>  
>  	/* Can't intercept CR register access, HV can't modify CR registers */
>  	svm_clr_intercept(svm, INTERCEPT_CR0_READ);
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index 32e35d396508..74bc635c9608 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -1379,9 +1379,16 @@ static int svm_create_vcpu(struct kvm_vcpu *vcpu)
>  	svm->vmcb01.ptr = page_address(vmcb01_page);
>  	svm->vmcb01.pa = __sme_set(page_to_pfn(vmcb01_page) << PAGE_SHIFT);
>  
> -	if (vmsa_page)
> +	if (vmsa_page) {
>  		svm->vmsa = page_address(vmsa_page);
>  
> +		/*
> +		 * Do not include the encryption mask on the VMSA physical
> +		 * address since hardware will access it using the guest key.
> +		 */
> +		svm->vmsa_pa = __pa(svm->vmsa);
> +	}
> +
>  	svm->guest_state_loaded = false;
>  
>  	svm_switch_vmcb(svm, &svm->vmcb01);
> diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
> index 9fcfc0a51737..285d9b97b4d2 100644
> --- a/arch/x86/kvm/svm/svm.h
> +++ b/arch/x86/kvm/svm/svm.h
> @@ -177,6 +177,7 @@ struct vcpu_svm {
>  
>  	/* SEV-ES support */
>  	struct sev_es_save_area *vmsa;
> +	hpa_t vmsa_pa;
>  	struct ghcb *ghcb;
>  	struct kvm_host_map ghcb_map;
>  	bool received_first_sipi;
> -- 
> 2.17.1
>
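
As a rough illustration of the "free svm->vmsa" point above (this is not
from the posted series), the AP Creation path could drop the VMSA page
that was allocated in svm_create_vcpu() once the guest supplies its own
VMSA and then point the VMCB at the new page.  The helper name and the
new_vmsa_hpa parameter are made up for the sketch, and the flush of the
encrypted page that sev_free_vcpu() does before freeing is elided:

	/*
	 * Illustrative sketch only.  Release the originally allocated VMSA
	 * page once the guest-provided VMSA takes over, then retarget the
	 * VMCB at the new physical address.
	 */
	static void sev_snp_set_ap_vmsa(struct vcpu_svm *svm, hpa_t new_vmsa_hpa)
	{
		if (svm->vmsa) {
			/*
			 * As in sev_free_vcpu(), the encrypted page would
			 * need to be flushed before being returned to the
			 * allocator (omitted here for brevity).
			 */
			__free_page(virt_to_page(svm->vmsa));
			svm->vmsa = NULL;
		}

		svm->vmsa_pa = new_vmsa_hpa;
		svm->vmcb->control.vmsa_pa = new_vmsa_hpa;
	}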