Re: [PATCH RFC v5 4/8] pvqspinlock, x86: Allow unfair spinlock in a real PV environment

On 04/03/2014 16:15, Waiman Long wrote:

PLE is unnecessary if you have "true" pv spinlocks where the
next-in-line schedules itself out with a hypercall (Xen) or hlt
instruction (KVM).  Set a bit in the qspinlock before going to sleep,
and the lock owner will know that it needs to kick the next-in-line.

I think there is no need for the unfair lock bits.  1-2% is a pretty
large hit.

I don't think that PLE is something that can be controlled by software.
It is done in hardware.

Yes, but the hypervisor decides *what* to do when the processor detects a pause-loop.

But my point is that if you have pv spinlocks, the processor will rarely, if ever, do a pause-loop exit. PLE is mostly for legacy guests that don't have pv spinlocks.

Paolo

I may be wrong. Anyway, in the next version of the patch I plan to add code to
schedule out the CPUs waiting in the queue, except the first two.

The PV code in the v5 patch did seem to improve benchmark performance
under moderate to heavy spinlock contention. However, I didn't see much
CPU kicking going on. My theory is that the additional PV code
complicates the pause-loop timing enough that hardware PLE didn't kick
in, whereas the original pause loop is simple enough that PLE triggered
fairly frequently.

_______________________________________________
Virtualization mailing list
Virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linuxfoundation.org/mailman/listinfo/virtualization



