Re: [RFC] Para-virtualized TLB flush for PV-waiting vCPUs

On Tue, Jan 07, 2025 at 12:56:52AM +0900, Kenta Ishiguro wrote:
>In oversubscribed environments, the latency of flushing the remote TLB can
>become significant when the destination virtual CPU (vCPU) is the waiter
>of a para-virtualized queued spinlock that halts with interrupts disabled.
>This occurs because the waiter does not respond to remote function call
>requests until it releases the spinlock. As a result, the source vCPU
>wastes CPU time performing busy-waiting for a response from the
>destination vCPU.
>
>To mitigate this issue, this patch extends the target of the PV TLB flush
>to include vCPUs that are halted waiting on the PV qspinlock. Because PV
>qspinlock waiters voluntarily yield before KVM preempts them, they are
>never marked as preempted, so the current PV TLB flush overlooks them.
>This change allows vCPUs to avoid busy-waiting on PV qspinlock waiters
>during TLB shootdowns.

This doesn't seem to be a KVM-specific problem; other hypervisors should
have the same problem. So, I think we can implement a more generic solution
w/o involving the hypervisor: the guest can track which vCPUs are waiting
on the PV qspinlock, defer the TLB flush for them, and have those vCPUs
perform the flush themselves once they complete their wait (e.g., right
after the halt() in kvm_wait()).



