On 2/18/25 19:27, Sean Christopherson wrote:
> Mark the VMCB dirty, i.e. zero control.clean, prior to handling the new
> VMSA.  Nothing in the VALID_PAGE() case touches control.clean, and
> isolating the VALID_PAGE() code will allow simplifying the overall logic.
>
> Note, the VMCB probably doesn't need to be marked dirty when the VMSA is
> invalid, as KVM will disallow running the vCPU in such a state.  But it
> also doesn't hurt anything.

Reviewed-by: Tom Lendacky <thomas.lendacky@xxxxxxx>

>
> Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>
> ---
>  arch/x86/kvm/svm/sev.c | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index 241cf7769508..3a531232c3a1 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -3852,6 +3852,12 @@ static int __sev_snp_update_protected_guest_state(struct kvm_vcpu *vcpu)
>  	/* Clear use of the VMSA */
>  	svm->vmcb->control.vmsa_pa = INVALID_PAGE;
>
> +	/*
> +	 * When replacing the VMSA during SEV-SNP AP creation,
> +	 * mark the VMCB dirty so that full state is always reloaded.
> +	 */
> +	vmcb_mark_all_dirty(svm->vmcb);
> +
>  	if (VALID_PAGE(svm->sev_es.snp_vmsa_gpa)) {
>  		gfn_t gfn = gpa_to_gfn(svm->sev_es.snp_vmsa_gpa);
>  		struct kvm_memory_slot *slot;
> @@ -3897,12 +3903,6 @@ static int __sev_snp_update_protected_guest_state(struct kvm_vcpu *vcpu)
>  		kvm_release_page_clean(page);
>  	}
>
> -	/*
> -	 * When replacing the VMSA during SEV-SNP AP creation,
> -	 * mark the VMCB dirty so that full state is always reloaded.
> -	 */
> -	vmcb_mark_all_dirty(svm->vmcb);
> -
>  	return 0;
>  }
>
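
For reference, the "mark dirty, i.e. zero control.clean" wording in the changelog corresponds to what the helper does; a minimal sketch, assuming the vmcb_mark_all_dirty() definition in arch/x86/kvm/svm/svm.h (worth double-checking against the current tree):

	/* Sketch of the helper: clear all clean bits so hardware treats
	 * every VMCB field as modified and reloads full state on VMRUN. */
	static inline void vmcb_mark_all_dirty(struct vmcb *vmcb)
	{
		vmcb->control.clean = 0;
	}

Since nothing in the VALID_PAGE() path touches control.clean, hoisting the call above that block keeps the behavior the same while letting the VALID_PAGE() handling be isolated later in the series.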