On 5/12/2022 9:18 PM, Paolo Bonzini wrote:
> On 5/11/22 09:43, Yang, Weijiang wrote:
>>> Instead of using flip_arch_lbr_ctl, SMM should save the value of the MSR
>>> in kvm_x86_ops->enter_smm, and restore it in kvm_x86_ops->leave_smm
>>> (feel free to do it only if guest_cpuid_has(vcpu, X86_FEATURE_LM), i.e.
>>> the 32-bit case can be ignored).
>> In the case of migration in SMM, I assume kvm_x86_ops->enter_smm() is
>> called on the source side and kvm_x86_ops->leave_smm() is called at the
>> destination; should the saved LBREn then be transferred to the
>> destination too? The destination could rely on the bit to defer setting
>> the LBREn bit in the VMCS until kvm_x86_ops->leave_smm() is called. Is
>> that good? Thanks!
> Hi, it's transferred automatically if the MSR is saved in the SMM save
> state area. Both enter_smm and leave_smm can access the save state area.
> The enter_smm callback is called after saving "normal" state, and it has
> to save the state + clear the bit; likewise, the leave_smm callback is
> called before saving "normal" state and will restore the old value of
> the MSR.
>
> Thanks,
>
> Paolo

Got it, thanks!

But there's no such slot for MSR_ARCH_LBR_CTL in SMRAM, do you still suggest
using this mechanism to implement the LBREn clear/restore logic?