Re: [PATCH v2] KVM: SEV-ES: Don't intercept MSR_IA32_DEBUGCTLMSR for SEV-ES guests

On 5/22/2024 2:17 AM, Paolo Bonzini wrote:
> On Tue, May 21, 2024 at 10:31 PM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
>>
>> On Mon, May 20, 2024, Ravi Bangoria wrote:
>>> On 17-May-24 8:01 PM, Sean Christopherson wrote:
>>>> On Fri, May 17, 2024, Ravi Bangoria wrote:
>>>>> On 08-May-24 12:37 AM, Sean Christopherson wrote:
>>>>>> So unless I'm missing something, the only reason to ever disable LBRV would be
>>>>>> for performance reasons.  Indeed the original commits more or less says as much:
>>>>>>
>>>>>>   commit 24e09cbf480a72f9c952af4ca77b159503dca44b
>>>>>>   Author:     Joerg Roedel <joerg.roedel@xxxxxxx>
>>>>>>   AuthorDate: Wed Feb 13 18:58:47 2008 +0100
>>>>>>
>>>>>>     KVM: SVM: enable LBR virtualization
>>>>>>
>>>>>>     This patch implements the Last Branch Record Virtualization (LBRV) feature of
>>>>>>     the AMD Barcelona and Phenom processors into the kvm-amd module. It will only
>>>>>>     be enabled if the guest enables last branch recording in the DEBUG_CTL MSR. So
>>>>>>     there is no increased world switch overhead when the guest doesn't use these
>>>>>>     MSRs.
>>>>>>
>>>>>> but what it _doesn't_ say is what the world switch overhead is when LBRV is
>>>>>> enabled.  If the overhead is small, e.g. 20 cycles?, then I see no reason to
>>>>>> keep the dynamically toggling.
>>>>>>
>>>>>> And if we ditch the dynamic toggling, then this patch is unnecessary to fix the
>>>>>> LBRV issue.  It _is_ necessary to actually let the guest use the LBRs, but that's
>>>>>> a wildly different changelog and justification.
>>>>>
>>>>> The overhead might be less for legacy LBR. But upcoming hw also supports
>>>>> LBR Stack Virtualization[1]. LBR Stack has total 34 MSRs (two control and
>>>>> 16*2 stack). Also, Legacy and Stack LBR virtualization both are controlled
>>>>> through the same VMCB bit. So I think I still need to keep the dynamic
>>>>> toggling for LBR Stack virtualization.
>>>>
>>>> Please get performance number so that we can make an informed decision.  I don't
>>>> want to carry complexity because we _think_ the overhead would be too high.
>>>
>>> LBR Virtualization overhead for guest entry + exit roundtrip is ~450 cycles* on
>>
>> Ouch.  Just to clarify, that's for LBR Stack Virtualization, correct?
> 
> And they are all in the VMSA, triggered by LBR_CTL_ENABLE_MASK, for
> non SEV-ES guests?

Not sure I follow. LBR_CTL_ENABLE_MASK works the same for all guest types;
it's just that enabling it is mandatory for SEV-ES guests and optional for
SVM and SEV guests.
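For reference, the dynamic toggling being discussed can be sketched roughly as
below. This is a simplified, hypothetical model, not the actual KVM code: the
`struct vcpu`/`update_lbrv` shapes are invented for illustration, though
DEBUGCTLMSR_LBR and LBR_CTL_ENABLE_MASK correspond to real kernel bit
definitions. The idea is that LBRV is enabled in the VMCB only while the guest
has LBR recording on, except for SEV-ES guests, where it must stay on because
the DEBUG_CTL state lives in the encrypted VMSA and can't be intercepted:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define DEBUGCTLMSR_LBR      (1ULL << 0)  /* guest's LBR enable in DEBUG_CTL */
#define LBR_CTL_ENABLE_MASK  (1ULL << 0)  /* VMCB bit gating LBR virtualization */

/* Hypothetical, simplified stand-ins for the kernel's VMCB/vCPU structures. */
struct vmcb_control {
    uint64_t virt_ext;
};

struct vcpu {
    struct vmcb_control control;
    bool sev_es;  /* SEV-ES guest: register state is in the encrypted VMSA */
};

/*
 * Sketch of the dynamic toggle: pay the LBRV world-switch cost only while
 * the guest actually uses LBRs, but keep it always on for SEV-ES, where
 * LBRV is mandatory.
 */
static void update_lbrv(struct vcpu *v, uint64_t guest_debugctl)
{
    if (v->sev_es || (guest_debugctl & DEBUGCTLMSR_LBR))
        v->control.virt_ext |= LBR_CTL_ENABLE_MASK;
    else
        v->control.virt_ext &= ~LBR_CTL_ENABLE_MASK;
}
```

Dropping the dynamic toggling would amount to unconditionally taking the first
branch, trading the measured ~450-cycle round-trip overhead for less complexity.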

Thanks,
Ravi



