On 02/07/2012 10:02 AM, Avi Kivity wrote:
On 02/07/2012 05:17 PM, Anthony Liguori wrote:
On 02/07/2012 06:03 AM, Avi Kivity wrote:
On 02/06/2012 09:11 PM, Anthony Liguori wrote:
I'm not so sure. ioeventfds and a future mmio-over-socketpair have to put the
kthread to sleep while it waits for the other end to process it. This is
effectively equivalent to a heavyweight exit. The difference in cost is
dropping to userspace, which is really negligible these days (< 100 cycles).
On what machine did you measure these wonderful numbers?
A syscall is what I mean by "dropping to userspace", not the cost of a
heavyweight exit.
Ah. But then ioeventfd has that as well, unless the other end is in the kernel too.
Yes, that was my point exactly :-)
ioeventfd/mmio-over-socketpair to a different thread is not faster than a
synchronous KVM_RUN + writing to an eventfd in userspace, modulo a couple of
cheap syscalls.
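
For concreteness, a minimal sketch of the userspace path being compared here
(assuming a vcpu fd and its mmap'ed struct kvm_run already set up via the
usual KVM_CREATE_VM/KVM_CREATE_VCPU calls, and a hypothetical doorbell eventfd
shared with the device thread):

#include <stdint.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Sketch only: vcpu_fd and run are assumed to have been set up earlier;
 * doorbell_fd is an eventfd(0, 0) that the device thread blocks on. */
static void vcpu_loop(int vcpu_fd, struct kvm_run *run, int doorbell_fd)
{
        for (;;) {
                ioctl(vcpu_fd, KVM_RUN, 0);     /* heavyweight exit back to us */

                if (run->exit_reason == KVM_EXIT_MMIO) {
                        /* Kick the device thread from userspace: one cheap
                         * write() on the eventfd instead of an in-kernel
                         * ioeventfd signal. */
                        uint64_t one = 1;
                        write(doorbell_fd, &one, sizeof(one));
                }
                /* ... handle other exit reasons ... */
        }
}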
The exception is when the other end is in the kernel and there are magic
optimizations (as there are today with ioeventfd); a registration sketch is
below.
Regards,
Anthony Liguori
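
For reference, the in-kernel fast path mentioned above is wired up through the
KVM_IOEVENTFD ioctl on the VM fd. A minimal sketch, assuming a hypothetical
4-byte MMIO doorbell register at VIRTIO_DOORBELL_ADDR (an address made up for
illustration):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

#define VIRTIO_DOORBELL_ADDR 0xfe003000ULL      /* illustrative only */

/* Ask KVM to complete guest writes to the doorbell entirely in the kernel,
 * signalling 'fd' instead of exiting to userspace. */
static int register_doorbell(int vm_fd, int fd)
{
        struct kvm_ioeventfd ioev;

        memset(&ioev, 0, sizeof(ioev));
        ioev.addr  = VIRTIO_DOORBELL_ADDR;
        ioev.len   = 4;         /* match 4-byte accesses */
        ioev.fd    = fd;        /* eventfd(0, 0) created by the caller */
        ioev.flags = 0;         /* MMIO, no datamatch */

        return ioctl(vm_fd, KVM_IOEVENTFD, &ioev);
}

With this in place a guest write to the doorbell never reaches userspace at
all, which is the "magic optimization" being contrasted with the plain
KVM_RUN-plus-write() path above.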
I think a heavyweight exit is still around a few thousand cycles.
Any Nehalem-class or better processor should have a syscall cost of around
that, unless I'm wildly mistaken.
That's what I remember too.
But I agree a heavyweight exit is probably faster than a double context switch
on a remote core.
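
As a rough sanity check on the cycle figures above, a microbenchmark sketch of
the bare syscall round trip (numbers will obviously vary by CPU and kernel;
getpid is used here only because it is about the cheapest syscall available,
and going through syscall() avoids glibc's cached getpid):

#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <x86intrin.h>          /* __rdtsc(), x86-only */

int main(void)
{
        const int iters = 1000000;
        unsigned long long start, end;
        int i;

        start = __rdtsc();
        for (i = 0; i < iters; i++)
                syscall(SYS_getpid);    /* enter and leave the kernel */
        end = __rdtsc();

        printf("~%llu cycles per syscall round trip\n",
               (end - start) / iters);
        return 0;
}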
I meant, if you already need to take a heavyweight exit (and you do, to
schedule something else on the core), then the only additional cost is
returning from the syscall to userspace *first* before scheduling another
process. That overhead is pretty low.
Yeah.