On 02/07/2012 10:18 AM, Jan Kiszka wrote:
> On 2012-02-07 17:02, Avi Kivity wrote:
>> On 02/07/2012 05:17 PM, Anthony Liguori wrote:
>>> On 02/07/2012 06:03 AM, Avi Kivity wrote:
>>>> On 02/06/2012 09:11 PM, Anthony Liguori wrote:
>>>>> I'm not so sure. ioeventfds and a future mmio-over-socketpair have to
>>>>> put the kthread to sleep while it waits for the other end to process
>>>>> it. This is effectively equivalent to a heavy weight exit. The
>>>>> difference in cost is dropping to userspace, which is really
>>>>> negligible these days (< 100 cycles).
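
For context: the ioeventfd path being described is driven from userspace
through the KVM_IOEVENTFD ioctl. Below is a minimal sketch of wiring a guest
doorbell write to an eventfd; vm_fd is assumed to be an already-configured
KVM VM descriptor, DOORBELL_ADDR is a made-up address, and error handling is
trimmed.

#include <linux/kvm.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <unistd.h>

#define DOORBELL_ADDR 0xfe000000ULL   /* hypothetical MMIO doorbell */

/* Register an eventfd so a 4-byte guest write to DOORBELL_ADDR is
 * signalled in-kernel instead of forcing an exit to userspace. */
static int register_doorbell(int vm_fd)
{
    int efd = eventfd(0, 0);
    if (efd < 0)
        return -1;

    struct kvm_ioeventfd ioev = {
        .addr  = DOORBELL_ADDR,
        .len   = 4,      /* match 4-byte writes */
        .fd    = efd,
        .flags = 0,      /* MMIO; any written value matches */
    };
    if (ioctl(vm_fd, KVM_IOEVENTFD, &ioev) < 0) {
        close(efd);
        return -1;
    }
    return efd;          /* read() on this blocks until the guest kicks */
}

A read() on the returned descriptor then sleeps until the guest writes,
which is exactly the sleep/wakeup being discussed.
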
>>>> On what machine did you measure these wonderful numbers?
>>> A syscall is what I mean by "dropping to userspace", not the cost of a
>>> heavy weight exit.
>> Ah. But then ioeventfd has that as well, unless the other end is in the
>> kernel too.
>>>> I think a heavy weight exit is still around a few thousand cycles.
>>> Any Nehalem class or better processor should have a syscall cost of
>>> around that unless I'm wildly mistaken.
>> That's what I remember too. But I agree a heavyweight exit is probably
>> faster than a double context switch on a remote core.
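
For what it's worth, the syscall number is easy to sanity-check with a
deliberately naive loop like the one below, timed with the TSC. Treat it as
illustrative only; TSC serialization, frequency scaling, and warm caches all
skew the result.

#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <x86intrin.h>   /* __rdtsc() */

int main(void)
{
    enum { ITERS = 1000000 };
    unsigned long long start = __rdtsc();
    for (int i = 0; i < ITERS; i++)
        syscall(SYS_getpid);   /* raw syscall, bypassing glibc's caching */
    unsigned long long delta = __rdtsc() - start;
    printf("~%llu cycles per syscall round trip\n", delta / ITERS);
    return 0;
}
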
>>> I meant, if you already need to take a heavyweight exit (and you do to
>>> schedule something else on the core), then the only additional cost is
>>> taking a syscall return to userspace *first* before scheduling another
>>> process. That overhead is pretty low.
>> Yeah.
> Isn't there another level in between just scheduling and full syscall
> return if the user return notifier has some real work to do?
>
> Jan
Depends on whether you're scheduling a kthread or a userspace process, no?
If you're eventually going to end up in userspace, you have to do the full
heavyweight exit. If you're scheduling to a kthread, it's better to do the
type of trickery that ioeventfd does and just turn it into a function call.
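
To make the comparison concrete: the userspace analogue of that kthread
wakeup is a handler thread parked in read() on the doorbell eventfd from the
sketch earlier in the thread. Waking it from a guest kick is precisely the
double context switch being weighed against a heavyweight exit. A sketch:

#include <pthread.h>
#include <stdint.h>
#include <unistd.h>

/* The "other end": parked in read() until the guest writes the doorbell.
 * Each wakeup costs a switch to this thread and another switch back. */
static void *doorbell_handler(void *arg)
{
    int efd = *(int *)arg;
    uint64_t kicks;

    while (read(efd, &kicks, sizeof(kicks)) == sizeof(kicks)) {
        /* `kicks` counts coalesced guest writes since the last read;
         * process the doorbell here */
    }
    return NULL;
}

Started with pthread_create(&tid, NULL, doorbell_handler, &efd), this is the
path whose two context switches the in-kernel (function call) variant avoids.
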
Regards,
Anthony Liguori