On 10/8/2023 1:45 pm, Xiong Zhang wrote:
+4. LBR Virtualization
+=====================
+
+4.1. Overview
+-------------
+
+The guest LBR driver would access the LBR MSR (including IA32_DEBUGCTLMSR
+and records MSRs) as host does once KVM/QEMU export vcpu's LBR capability
+into guest, The first guest access on LBR related MSRs is always
+interceptable. The KVM trap would create a vLBR perf event which enables
s/interceptable/intercepted/
+the callstack mode and none of the hardware counters are assigned. The
s/the callstack mode/the LBR callstack mode/
+host perf would enable and schedule this event as usual.
s/as usual./as usual, in the absence of contention./
+
+When vLBR event is scheduled by host perf scheduler and is active, host
+LBR MSRs are owned by guest and are pass-through into guest, guest will
+access them without VM Exit. However, if another host LBR event comes in
+and takes over the LBR facility, the vLBR event will be in error state,
Doesn't the LBR event have a scheduling priority?
+and the guest following access to the LBR MSRs will be trapped and
+meaningless.
One description missing here: does KVM retry creating the vLBR event after it enters the error state, and how frequently?
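For my own understanding, the lifecycle described above reduces to a small state machine; here is a toy model of it (all names are mine, not actual KVM symbols):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the vLBR event lifecycle: illustrative only. */
enum vlbr_state { VLBR_NONE, VLBR_ACTIVE, VLBR_ERROR };

struct vcpu_lbr {
    enum vlbr_state state;
    bool passthrough;   /* LBR MSRs mapped directly into the guest */
};

/*
 * First guest access to an LBR MSR always traps: KVM creates the vLBR
 * perf event (callstack mode, no counter).  If host perf can schedule
 * it, the LBR MSRs are passed through from then on.
 */
static void guest_lbr_msr_access(struct vcpu_lbr *v, bool host_has_pinned_lbr)
{
    if (v->state == VLBR_NONE)
        v->state = host_has_pinned_lbr ? VLBR_ERROR : VLBR_ACTIVE;

    /* Pass-through only while the event is actively scheduled; in the
     * error state every access keeps trapping and reads are stale. */
    v->passthrough = (v->state == VLBR_ACTIVE);
}

/* Another host LBR event takes over the facility: vLBR goes to error. */
static void host_lbr_preempts(struct vcpu_lbr *v)
{
    if (v->state == VLBR_ACTIVE) {
        v->state = VLBR_ERROR;
        v->passthrough = false;
    }
}
```

This is just how I read the text; if the real transitions differ (e.g. KVM recreates the event on a later trap), that is exactly the kind of detail the doc should spell out.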
+
+As kvm perf event, vLBR event will be released when guest doesn't access
+LBR-related MSRs within a scheduling time slice and guest unset LBR
Not all LBR-related MSRs, only the DEBUGCTLMSR_LBR bit for now.
+enable bit, then the pass-through state of the LBR MSRs will be canceled.
+
+4.2. Host and Guest LBR contention
+----------------------------------
+
+vLBR event is a per-process pinned event, its priority is second. vLBR
+event together with host other LBR event to contend LBR resource,
+according to perf scheduler rule, when vLBR event is active, it can be
+preempted by host per-cpu pinned LBR event, or it can preempt host
+flexible LBR event. Such preemption can be temporarily prohibited
+through disabling host IRQ as perf scheduler uses IPI to change LBR owner.
The same description could be shared with the counter contention section.
+
+The following results are expected when host and guest LBR event coexist:
+1) If host per cpu pinned LBR event is active when vm starts, the guest
+vLBR event can not preempt the LBR resource, so the guest can not use
+LBR.
This is not the same as the current implementation. One could argue that this is the expected behavior, but the current state of the system must be described first.
+2). If host flexible LBR events are active when vm starts, guest vLBR
+event can preempt LBR, so the guest can use LBR.
+3). If host per cpu pinned LBR event becomes enabled when guest vLBR
+event is active, the guest vLBR event will lose LBR and the guest can
+not use LBR anymore.
+4). If host flexible LBR event becomes enabled when guest vLBR event is
+active, the guest vLBR event keeps LBR, the guest can still use LBR.
+5). If host per cpu pinned LBR event turns off when guest vLBR event is
+not active, guest vLBR event can be active and own LBR, the guest can use
+LBR.
+
+4.3. vLBR Arch Gaps
+-------------------
+
+Like vPMU Arch Gap, vLBR event can be preempted by host Per cpu pinned
+event at any time, or vLBR event is not active at creation, but guest
+can not notice this, so the guest will get meaningless value when the
+vLBR event is not active.
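Back on 4.2: the five expected outcomes all reduce to one scheduling rule, which might be worth stating explicitly. A toy encoding of how I read it (names are mine):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative only: the vLBR event is a per-process pinned event, so
 * under the perf scheduling order
 *     per-cpu pinned  >  per-process pinned (vLBR)  >  flexible
 * it loses the LBR facility to host per-cpu pinned LBR events
 * (cases 1 and 3) and wins against host flexible LBR events or no
 * host event at all (cases 2, 4 and 5). */
enum host_lbr { HOST_NONE, HOST_FLEXIBLE, HOST_PERCPU_PINNED };

/* Does the guest vLBR event end up owning the LBR facility when it
 * contends with the given host LBR event? */
static bool vlbr_wins(enum host_lbr host)
{
    return host != HOST_PERCPU_PINNED;
}
```

If the doc stated the rule once in this form, the five enumerated results could follow as consequences rather than independent facts.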
Another gap is live-migration support for guest LBR.