Re: [PATCH V2 03/11] perf/x86: Add support for TSC in nanoseconds as a perf event clock

On 25/04/22 12:32, Thomas Gleixner wrote:
> On Mon, Apr 25 2022 at 08:30, Adrian Hunter wrote:
>> On 14/03/22 13:50, Adrian Hunter wrote:
>>>> TSC offsetting may also be a problem. The VMCS TSC offset must be discoverable by the
>>>> guest. This can be done via TSC_ADJUST MSR. The offset in the VMCS and the guest
>>>> TSC_ADJUST MSR must always be equivalent, i.e. a write to TSC_ADJUST in the guest
>>>> must be reflected in the VMCS and any changes to the offset in the VMCS must be
>>>> reflected in the TSC_ADJUST MSR. Otherwise a para-virtualized method must
>>>> be invented to communicate an arbitrary VMCS TSC offset to the guest.
>>>>
>>>
>>> In my view it is reasonable for perf to support TSC as a perf clock in any case
>>> because:
>>> 	a) it allows users to work entirely with TSC if they wish
>>> 	b) other kernel performance / debug facilities like ftrace already support TSC
>>> 	c) the patches to add TSC support are relatively small and straightforward
>>>
>>> May we have support for TSC as a perf event clock?
>>
>> Any update on this?
> 
> If TSC is reliable on the host, then there is absolutely no reason not
> to use it in the guest all over the place. And that is independent of
> exposing ART to the guest.
> 
> So why do we need extra solutions for PT and perf, ftrace and whatever?
> 
> Can we just fix the underlying problem and make the hypervisor tell the
> guest that TSC is stable, reliable and good to use?
> 
> Then everything else just falls into place and using TSC is a
> substantial performance gain in general. Just look at the VDSO
> implementation of __arch_get_hw_counter() -> vread_pvclock():
> 
> Instead of just reading the TSC, this needs to take a nested seqcount,
> read TSC and do yet another mult/shift, which makes clock_gettime() ~20%
> slower than necessary.
> 
> It's hilarious that we still cling to this pvclock abomination, while
> we happily expose TSC deadline timer to the guest. TSC virt scaling was
> implemented in hardware for a reason.
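
For reference, the read path being compared looks roughly like this. This is a
user-space sketch of the pvclock algorithm, not the exact vread_pvclock() code
(field names follow the kernel's struct pvclock_vcpu_time_info; the real code
uses a serialized RDTSC and proper barriers, reduced here to compiler barriers):

#include <stdint.h>

struct pvclock_vcpu_time_info {
	uint32_t version;		/* seqcount: odd = update in progress */
	uint32_t pad0;
	uint64_t tsc_timestamp;		/* host TSC at last hypervisor update */
	uint64_t system_time;		/* nanoseconds at tsc_timestamp */
	uint32_t tsc_to_system_mul;	/* TSC -> ns conversion, can change */
	int8_t   tsc_shift;
	uint8_t  flags;
	uint8_t  pad[2];
};

static inline uint64_t rdtsc(void)
{
	uint32_t lo, hi;

	asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
	return ((uint64_t)hi << 32) | lo;
}

static uint64_t pvclock_read_ns(volatile struct pvclock_vcpu_time_info *pvti)
{
	uint32_t version;
	uint64_t tsc, delta, ns;

	do {
		version = pvti->version;
		asm volatile("" ::: "memory");
		tsc = rdtsc();
		delta = tsc - pvti->tsc_timestamp;
		if (pvti->tsc_shift >= 0)
			delta <<= pvti->tsc_shift;
		else
			delta >>= -pvti->tsc_shift;
		/* the extra mult/shift on top of the one the caller does */
		ns = pvti->system_time +
		     (uint64_t)(((__uint128_t)delta *
				 pvti->tsc_to_system_mul) >> 32);
		asm volatile("" ::: "memory");
	} while ((version & 1) || version != pvti->version);

	return ns;
}

With a plain TSC clocksource the retry loop and the second conversion go away:
__arch_get_hw_counter() becomes essentially just RDTSC.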

So you are talking about changing the VMX TSC Offset on every VM-Entry to try to
hide the time jumps when the VM is scheduled out?  Or about neglecting that and
just letting the time jumps happen?

If changing the VMX TSC Offset, how can the TSC be kept consistent across VCPUs,
i.e. wouldn't that mean every VCPU has to have the same VMX TSC Offset?
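
To make the question concrete, per my reading of the SDM's formula for plain
TSC offsetting (TSC scaling left aside), a guest RDTSC on any given VCPU
returns:

	/* VMX non-root operation, "use TSC offsetting" enabled, no scaling */
	guest_tsc = host_tsc + vmcs_tsc_offset;

So two VCPUs sampling at the same host_tsc differ by exactly the difference of
their offsets, and adjusting the offset at VM-Entry on one VCPU without an
identical adjustment on every other VCPU makes the guest's TSCs diverge by that
delta.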


