Re: [PATCH] use unfair spinlock when running on hypervisor.

On 06/01/2010 08:27 PM, Andi Kleen wrote:
On Tue, Jun 01, 2010 at 07:52:28PM +0300, Avi Kivity wrote:
We are running everything on NUMA (since all modern machines are now NUMA).
  At what scale do the issues become observable?
On Intel platforms it's visible starting with 4 sockets.

Can you recommend a benchmark that shows bad behaviour? I'll run it with ticket spinlocks and Gleb's patch. I have a 4-way Nehalem-EX; presumably the huge number of threads will magnify the problem even more there.
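
Aside for readers joining the thread: the patch in the subject line swaps the
kernel's fair ticket spinlock for an unfair lock when a hypervisor is detected.
Below is a minimal sketch of the two lock styles being compared, assuming GCC
atomic builtins on x86; it is neither Gleb's actual patch nor the kernel's
arch_spinlock code, and all identifiers are made up for illustration.

	#include <stdint.h>

	/* Fair ticket lock: waiters acquire in strict arrival (FIFO) order. */
	struct ticket_lock {
		volatile uint16_t next;   /* next ticket to hand out */
		volatile uint16_t owner;  /* ticket currently allowed in */
	};

	static void ticket_lock_acquire(struct ticket_lock *l)
	{
		uint16_t me = __sync_fetch_and_add(&l->next, 1); /* take a ticket */

		while (l->owner != me)            /* spin until it is our turn */
			__asm__ __volatile__("pause" ::: "memory");
	}

	static void ticket_lock_release(struct ticket_lock *l)
	{
		l->owner++;                       /* hand off to the next ticket */
	}

	/* Unfair test-and-set lock: whichever CPU wins the atomic race gets
	 * the lock, regardless of how long the others have been waiting. */
	struct tas_lock {
		volatile int locked;
	};

	static void tas_lock_acquire(struct tas_lock *l)
	{
		while (__sync_lock_test_and_set(&l->locked, 1))   /* old value */
			while (l->locked)                 /* read-only spin */
				__asm__ __volatile__("pause" ::: "memory");
	}

	static void tas_lock_release(struct tas_lock *l)
	{
		__sync_lock_release(&l->locked);
	}

The fair lock is what large NUMA hosts want; the unfair one is what an
overcommitted guest wants, since any vcpu that is actually running can take it.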

I understand that reason and do not propose going back to the old spinlock
on physical HW! But with virtualization the performance hit is unbearable.

Extreme unfairness can be unbearable too.

Well, the question is what happens first.  In our experience, vcpu
overcommit is a lot more painful.  People will never see the NUMA
unfairness issue if they can't use kvm due to the vcpu overcommit problem.
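
A rough worked illustration of that claim, with made-up numbers rather than
measurements: take a guest with more vcpus than the host has free cpus, a lock
normally held for about a microsecond, and a host scheduler timeslice of a few
milliseconds. With a ticket lock, if the vcpu holding the next ticket has been
preempted, every later waiter spins until the host runs that vcpu again, so an
acquisition that should cost microseconds can stall all waiters for
milliseconds; an unfair lock lets any vcpu that is currently running take the
lock immediately.
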
You really have to address both; if you don't fix them both,
users will eventually run into one of them and be unhappy.

That's definitely the long term plan. I consider Gleb's patch the first step.

Do you have any idea how we can tackle both problems?
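
For completeness, the "when running on hypervisor" condition in the subject
line can be tested via the CPUID hypervisor-present bit (leaf 1, ECX bit 31),
which hypervisors such as kvm set for their guests; in the kernel this shows
up as the X86_FEATURE_HYPERVISOR cpu feature. A standalone userspace sketch of
the check, not the patch's actual code:

	#include <stdio.h>

	/* Query CPUID leaf 1 and test ECX bit 31, the "hypervisor present"
	 * bit; bare metal leaves it clear. */
	static int running_on_hypervisor(void)
	{
		unsigned int eax = 1, ebx, ecx, edx;

		__asm__ __volatile__("cpuid"
				     : "+a"(eax), "=b"(ebx), "=c"(ecx), "=d"(edx));
		return (ecx >> 31) & 1;
	}

	int main(void)
	{
		printf("hypervisor detected: %s\n",
		       running_on_hypervisor() ? "yes" : "no");
		return 0;
	}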

--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.


