On Tue, Jun 21, 2022, Maxim Levitsky wrote:
> Use kvm_smram_state_64 struct to save/restore the 64 bit SMM state
> (used when X86_FEATURE_LM is present in the guest CPUID,
> regardless of 32-bitness of the guest).
>
> Signed-off-by: Maxim Levitsky <mlevitsk@xxxxxxxxxx>
> ---
> @@ -9814,7 +9805,7 @@ static void enter_smm(struct kvm_vcpu *vcpu)
>  	memset(buf, 0, 512);
>  #ifdef CONFIG_X86_64
>  	if (guest_cpuid_has(vcpu, X86_FEATURE_LM))
> -		enter_smm_save_state_64(vcpu, buf);
> +		enter_smm_save_state_64(vcpu, (struct kvm_smram_state_64 *)buf);
>  	else
>  #endif
>  		enter_smm_save_state_32(vcpu, (struct kvm_smram_state_32 *)buf);

Hrm, I _love_ the approach overall, but I really dislike having to cast an
arbitrary buffer, especially in the SVM code.

Aha!  Rather than keeping a buffer and casting, create a union to hold
everything:

union kvm_smram {
	struct kvm_smram_state_64 smram64;
	struct kvm_smram_state_32 smram32;
	u8 bytes[512];
};

and then enter_smm() becomes:

static void enter_smm(struct kvm_vcpu *vcpu)
{
	struct kvm_segment cs, ds;
	struct desc_ptr dt;
	unsigned long cr0;
	union kvm_smram smram;

	BUILD_BUG_ON(sizeof(smram) != 512);

	memset(smram.bytes, 0, sizeof(smram));
#ifdef CONFIG_X86_64
	if (guest_cpuid_has(vcpu, X86_FEATURE_LM))
		enter_smm_save_state_64(vcpu, &smram.smram64);
	else
#endif
		enter_smm_save_state_32(vcpu, &smram.smram32);

	/*
	 * Give enter_smm() a chance to make ISA-specific changes to the vCPU
	 * state (e.g. leave guest mode) after we've saved the state into the
	 * SMM state-save area.
	 */
	static_call(kvm_x86_enter_smm)(vcpu, &smram);

	kvm_smm_changed(vcpu, true);
	kvm_vcpu_write_guest(vcpu, vcpu->arch.smbase + 0xfe00, smram.bytes,
			     sizeof(smram));

and em_rsm() gets similar treatment (rough sketch of that below).

Then the vendor code doesn't have to cast, e.g. SVM can do:

	smram->smram64.svm_guest_flag = 1;
	smram->smram64.svm_guest_vmcb_gpa = svm->nested.vmcb12_gpa;

That way we don't have to refactor this all again if we want to use SMRAM
to save something on Intel for VMX (though I agree with Jim that that's
probably a bad idea).
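
If it's useful as a reference, the em_rsm() side would look something like
the below (rough, untested sketch; assumes rsm_load_state_32() and
rsm_load_state_64() are also converted to take the typed pointers, elided
lines unchanged):

static int em_rsm(struct x86_emulate_ctxt *ctxt)
{
	unsigned long cr0, cr4, efer;
	union kvm_smram smram;
	u64 smbase;
	int ret;

	BUILD_BUG_ON(sizeof(smram) != 512);

	...

	smbase = ctxt->ops->get_smbase(ctxt);

	/* Read the full 512-byte state-save area through the union. */
	ret = ctxt->ops->read_phys(ctxt, smbase + 0xfe00,
				   smram.bytes, sizeof(smram));
	if (ret != X86EMUL_CONTINUE)
		return X86EMUL_UNHANDLEABLE;

	...

	/* The vendor callback gets the typed union, no cast needed. */
	if (ctxt->ops->leave_smm(ctxt, &smram))
		goto emulate_shutdown;

#ifdef CONFIG_X86_64
	if (emulator_has_longmode(ctxt))
		ret = rsm_load_state_64(ctxt, &smram.smram64);
	else
#endif
		ret = rsm_load_state_32(ctxt, &smram.smram32);

	...
}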
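
And the callbacks themselves would take the union instead of a raw buffer,
something like (again just a sketch of the prototype changes, keeping the
existing hook names):

	int (*enter_smm)(struct kvm_vcpu *vcpu, union kvm_smram *smram);
	int (*leave_smm)(struct kvm_vcpu *vcpu, const union kvm_smram *smram);

with bytes[512] doubling as the wire format for kvm_vcpu_write_guest() and
->read_phys(), so nothing outside the save/load helpers needs to care which
view of the union was filled.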