Re: [Qemu-devel] [PATCH] qemu-kvm: introduce cpu_start/cpu_stop commands

On 11/23/2010 04:24 PM, Anthony Liguori wrote:


Using monitor commands is fairly heavyweight for something as high-frequency as this. What control period do you see people using? Maybe we should define SIGUSR1 for vcpu start/stop.
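A rough sketch of what that could look like -- not the proposed patch, all names illustrative: SIGUSR1's only job is to kick the vcpu thread out of ioctl(KVM_RUN) (which then returns -EINTR), while a flag plus condvar do the actual parking. Installing the empty handler with sigaction() is omitted here.

#include <pthread.h>
#include <signal.h>
#include <stdbool.h>

typedef struct {
    pthread_t thread;
    pthread_mutex_t lock;
    pthread_cond_t resume;
    bool stop_requested;
} VCpu;

static void usr1_handler(int sig) { /* empty: only interrupts KVM_RUN */ }

void vcpu_stop(VCpu *v)                    /* controller side */
{
    pthread_mutex_lock(&v->lock);
    v->stop_requested = true;
    pthread_mutex_unlock(&v->lock);
    pthread_kill(v->thread, SIGUSR1);      /* force an exit from guest mode */
}

void vcpu_start(VCpu *v)
{
    pthread_mutex_lock(&v->lock);
    v->stop_requested = false;
    pthread_cond_signal(&v->resume);
    pthread_mutex_unlock(&v->lock);
}

void vcpu_park_if_stopped(VCpu *v)         /* vcpu thread, between KVM_RUN calls */
{
    pthread_mutex_lock(&v->lock);
    while (v->stop_requested)
        pthread_cond_wait(&v->resume, &v->lock);
    pthread_mutex_unlock(&v->lock);
}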

What happens if one vcpu is stopped while another is running? Spin loops and synchronous IPIs will take forever. Maybe we need to stop the entire process.

It's the same problem if a VCPU is descheduled while another is running.

We can fix that with directed yield or lock holder preemption prevention. But if a vcpu is stopped by qemu, we suddenly can't.

That only works for spin locks.

Here's the scenario:

1) VCPU 0 drops to userspace and acquires qemu_mutex
2) VCPU 0 gets descheduled
3) VCPU 1 needs to drop to userspace and acquire qemu_mutex, gets blocked and yields
4) If we're lucky, VCPU 0 gets scheduled, but it depends on how busy the system is

With CFS hard limits, once (2) happens, we're boned for (3) because (4) cannot happen. By having QEMU know about (2), it can choose to run just a little bit longer in order to drop qemu_mutex such that (3) never happens.
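Here's a minimal sketch of that "run a little bit longer" idea (illustrative names, not QEMU code): a vcpu thread that is asked to stop while holding qemu_mutex finishes the critical section first, so (3) never blocks on a stopped lock holder.

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t qemu_mutex = PTHREAD_MUTEX_INITIALIZER;
static __thread bool holds_big_lock;   /* does this vcpu thread hold qemu_mutex? */
static __thread bool stop_pending;     /* a stop request arrived while it did */

static void vcpu_park(void) { /* block until restarted, as in the SIGUSR1 sketch */ }

void big_lock_acquire(void)
{
    pthread_mutex_lock(&qemu_mutex);
    holds_big_lock = true;
}

void big_lock_release(void)
{
    holds_big_lock = false;
    pthread_mutex_unlock(&qemu_mutex);
    if (stop_pending) {                /* honor the deferred stop only now */
        stop_pending = false;
        vcpu_park();
    }
}

void vcpu_stop_requested(void)         /* runs on the vcpu thread itself */
{
    if (holds_big_lock)
        stop_pending = true;           /* finish the critical section first */
    else
        vcpu_park();
}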

There's some support for futex priority inheritance; perhaps we can leverage that. It's supposed to be for realtime threads, but perhaps we can hook the priority booster to directed yield.

It's really the same problem -- preempted lock holder -- only in userspace. We should be able to use the same solution.
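For the qemu_mutex side, the glibc piece is already there: a PTHREAD_PRIO_INHERIT mutex is implemented with PI futexes (FUTEX_LOCK_PI), so a blocked waiter boosts the lock holder. Whether that alone solves the problem here is the open question; this sketch (not QEMU code) only shows the API involved.

#include <pthread.h>

static pthread_mutex_t qemu_mutex;

static void qemu_mutex_init_pi(void)
{
    pthread_mutexattr_t attr;

    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);  /* PI futex underneath */
    pthread_mutex_init(&qemu_mutex, &attr);
    pthread_mutexattr_destroy(&attr);
}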



The problem with stopping the entire process is that a big motivation for this is to ensure that benchmarks have consistent results regardless of CPU capacity. If you just monitor the full process, then one VCPU may dominate the entitlement, resulting in very erratic benchmark results.

What's the desired behaviour? Give each vcpu 300M cycles per second, or give a 2vcpu guest 600M cycles per second?

Each vcpu gets 300M cycles per second.

You could monitor threads separately but stop the entire process. Stopping individual threads breaks down as soon as they start taking locks.
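Something like this is what "monitor threads separately but stop the entire process" would amount to -- a sketch with illustrative names, approximating the per-vcpu cycle entitlement by per-thread CPU time: sample each vcpu thread's CPU clock once per accounting period, and if any vcpu has exceeded its quota, pause the whole VM for the rest of the period.

#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>
#include <time.h>

static int64_t vcpu_thread_cpu_ns(pthread_t tid)
{
    clockid_t cid;
    struct timespec ts;

    if (pthread_getcpuclockid(tid, &cid) || clock_gettime(cid, &ts))
        return -1;
    return (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

/* Controller, at the end of each period; start_ns[] holds snapshots taken at
 * the period start.  Returning true means: stop every vcpu (e.g. SIGSTOP the
 * whole process) until the next period begins. */
bool any_vcpu_over_quota(const pthread_t *vcpus, const int64_t *start_ns,
                         int n, int64_t quota_ns)
{
    for (int i = 0; i < n; i++) {
        int64_t now = vcpu_thread_cpu_ns(vcpus[i]);
        if (now >= 0 && now - start_ns[i] > quota_ns)
            return true;
    }
    return false;
}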

I don't think so. PLE should work as expected. It's no different from a normally contended system.


PLE without directed yield is useless. With directed yield, it may work, but if the vcpu is stopped, it becomes ineffective.

Directed yield allows the scheduler to follow a bouncing lock around by increasing the priority (or decreasing vruntime) of the immediate lock holder at the expense of waiters. SIGSTOP may drop the priority of the lock holder to zero without giving PLE a way to adjust.
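To make that last point concrete, here is a purely conceptual sketch of directed yield (the real mechanism lives in the kernel as yield_to()/kvm_vcpu_on_spin(); the structures and fields below are made up): the spinning vcpu donates some of its run-soon entitlement to the suspected lock holder, which achieves nothing if the holder is SIGSTOPped.

#include <stdint.h>

struct vcpu_sched {
    int64_t vruntime;    /* lower means "runs sooner" under CFS-like rules */
    int     stopped;     /* a SIGSTOPped vcpu cannot run no matter the boost */
};

/* Spinner donates part of its entitlement to the lock holder. */
static int directed_yield(struct vcpu_sched *spinner,
                          struct vcpu_sched *holder, int64_t boost)
{
    if (holder->stopped)
        return -1;                    /* boost is pointless: holder can't run */
    spinner->vruntime += boost;       /* spinner agrees to run later */
    holder->vruntime  -= boost;       /* holder moves earlier in the queue */
    return 0;
}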

--
error compiling committee.c: too many arguments to function
