On Wed, May 03, 2023, Kai Huang wrote:
> On Wed, 2023-05-03 at 11:28 -0700, Sean Christopherson wrote:
> > From: Wenyao Hai <haiwenyao@xxxxxxxxxxxxx>
> > 
> > Open code setting "vcpu->arch.pat" in vmx_set_msr() instead of bouncing
> > through kvm_set_msr_common() to get to the same code in kvm_mtrr_set_msr().
> 
> What's the value of doing so, besides saving a function call to
> kvm_set_msr_common()?

To avoid complicating a very simple operation (writing vcpu->arch.pat), and to
align with SVM.

> PAT change shouldn't be something frequent, so it shouldn't be in a
> performance critical path.  Given the PAT logic on Intel and AMD is basically
> the same, isn't it better to do it in kvm_set_msr_common()?

I could go either way on calling into kvm_set_msr_common().  I agree that
performance isn't a concern.  Hmm, and kvm_set_msr_common() still has a case
statement for MSR_IA32_CR_PAT, so handling the write fully in vendor code won't
impact the code generation for other MSRs.

Though I am leaning towards saying we should handle loads and stores to
vcpu->arch.pat either entirely in common code _or_ entirely in vendor code,
i.e. either teach VMX and SVM to handle reads of PAT, or have their write paths
call kvm_set_msr_common().  A mix of both is definitely odd.

I don't have a strong preference on which of those two we choose.  I dislike
duplicating logic across VMX and SVM, but on the other hand it's so little
code.

I think I'd vote for handling everything in vendor code, mostly because this

	ret = kvm_set_msr_common(vcpu, msr_info);

gives the appearance that the write can fail, which is silly and misleading.

> For instance, given the MTRR code is also in common x86, if we ever want to
> add some additional logic, e.g. to calculate the effective memtype, isn't it
> better to handle PAT in common code too?

FWIW, I highly doubt we'll ever have code like that.  The truly effective
memtype calculations are too different between Intel and AMD, and doing
anything useful with the guest's effective memtype is likely a fool's errand.
> > Note, MSR_IA32_CR_PAT is 0x277, and is very subtly handled by
> > 
> > 	case 0x200 ... MSR_IA32_MC0_CTL2 - 1:
> > 
> > in kvm_set_msr_common().
> > 
> > Signed-off-by: Wenyao Hai <haiwenyao@xxxxxxxxxxxxx>
> > [sean: massage changelog]
> > Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>
> > ---
> >  arch/x86/kvm/vmx/vmx.c | 8 +++-----
> >  1 file changed, 3 insertions(+), 5 deletions(-)
> > 
> > diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> > index 44fb619803b8..53e249109483 100644
> > --- a/arch/x86/kvm/vmx/vmx.c
> > +++ b/arch/x86/kvm/vmx/vmx.c
> > @@ -2294,12 +2294,10 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
> >  		    get_vmcs12(vcpu)->vm_exit_controls & VM_EXIT_SAVE_IA32_PAT)
> >  			get_vmcs12(vcpu)->guest_ia32_pat = data;
> >  
> > -		if (vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_PAT) {
> > +		if (vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_PAT)
> >  			vmcs_write64(GUEST_IA32_PAT, data);
> > -			vcpu->arch.pat = data;
> > -			break;
> > -		}
> > -		ret = kvm_set_msr_common(vcpu, msr_info);
> > +
> > +		vcpu->arch.pat = data;
> >  		break;
> >  	case MSR_IA32_MCG_EXT_CTL:
> >  		if ((!msr_info->host_initiated &&