Re: [PATCH] ioeventfd: Introduce KVM_IOEVENTFD_FLAG_PIPE

On 07/04/2011 02:07 PM, Michael S. Tsirkin wrote:
On Mon, Jul 04, 2011 at 01:45:07PM +0300, Avi Kivity wrote:
>  On 07/04/2011 01:32 PM, Michael S. Tsirkin wrote:
>  >On Sun, Jul 03, 2011 at 08:04:49PM +0300, Sasha Levin wrote:
>  >>   The new flag allows passing a write side of a pipe instead of an
>  >>   eventfd to be notified of writes to the specified memory region.
>  >>
>  >>   Instead of signaling an event, the value written to the memory region
>  >>   is written to the pipe.
>  >>
>  >>   Using a pipe instead of an eventfd is useful when any value can be
>  >>   written to the memory region but we're interested in receiving the
>  >>   actual value rather than just a notification.
>  >>
>  >>   A simple practical example is the serial port. We are not
>  >>   interested in an exit every time a char is written to the port, but
>  >>   we do need to know what was written so we can handle it on the guest's behalf.
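
For concreteness, a minimal sketch of the userspace setup this implies. KVM_IOEVENTFD_FLAG_PIPE is only what this patch proposes; the rest is the existing KVM_IOEVENTFD interface, and the address just follows the serial example:

	#include <string.h>
	#include <sys/ioctl.h>
	#include <unistd.h>
	#include <linux/kvm.h>

	int wire_port_to_pipe(int vm_fd, int *read_fd)
	{
		int pipefd[2];
		struct kvm_ioeventfd ioev;

		if (pipe(pipefd) < 0)
			return -1;

		memset(&ioev, 0, sizeof(ioev));
		ioev.addr  = 0x3f8;			/* COM1 data register */
		ioev.len   = 1;
		ioev.fd    = pipefd[1];			/* write side given to KVM */
		ioev.flags = KVM_IOEVENTFD_FLAG_PIO |	/* port I/O, not MMIO */
			     KVM_IOEVENTFD_FLAG_PIPE;	/* proposed by this patch */

		if (ioctl(vm_fd, KVM_IOEVENTFD, &ioev) < 0) {
			close(pipefd[0]);
			close(pipefd[1]);
			return -1;
		}

		*read_fd = pipefd[0];	/* userspace reads the written bytes here */
		return 0;
	}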
>  >
>  >Looking at this example, how would you handle a pipe full condition?
>  >We can't buffer an unlimited amount of data in the host.
>
>  Stall.

Right, but the guest gets no indication that the pipe is full.
Something like virtio would let the guest do something useful
instead of stalling the vcpu.

That's not a problem. The vcpu blocks, which lets the other process get the cpu and run with it. If there are not enough cpu resources, we'll indeed stall the vcpu, but that happens whenever you're overcommitted anyway.

Note also that the fd can be set not to block, or that
a signal can interrupt the write. Neither case is an error.

One thing we can do is return via the normal KVM_EXIT_MMIO method and hope userspace knows how to handle this. Otherwise I don't see what we can do.
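
That fallback is just the ordinary vcpu run loop. A minimal sketch, where handle_mmio is a hypothetical userspace dispatcher:

	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* hypothetical userspace dispatcher, not part of KVM */
	extern void handle_mmio(__u64 addr, __u8 *data, __u32 len, __u8 is_write);

	/* run points at the vcpu's mmap()ed struct kvm_run */
	void run_vcpu(int vcpu_fd, struct kvm_run *run)
	{
		for (;;) {
			if (ioctl(vcpu_fd, KVM_RUN, 0) < 0)
				break;
			if (run->exit_reason == KVM_EXIT_MMIO)
				handle_mmio(run->mmio.phys_addr, run->mmio.data,
					    run->mmio.len, run->mmio.is_write);
		}
	}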

>  >
>  >If the pipe is non-blocking, or if we get a signal,
>  >this might fail or return a value < len.
>  >Data will be lost then, won't it?
>
>  Yes.  Need a loop-until-buffer-exhausted-or-error.

Signal handling becomes a problem. You don't want a
full pipe to prevent qemu from being killed or from
getting a timer alert.
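
Such a loop would look roughly like the hypothetical sketch below; note that the EINTR retry at its heart is precisely what creates the full-pipe signal problem:

	#include <errno.h>
	#include <unistd.h>

	/* hypothetical helper, not from the patch: push one value into
	 * the pipe, retrying on signals and partial writes */
	static int push_value(int fd, const void *buf, size_t len)
	{
		const char *p = buf;

		while (len) {
			ssize_t n = write(fd, p, len);

			if (n < 0) {
				if (errno == EINTR)
					continue;	/* retry, so a pending
							 * signal never breaks
							 * out of a full pipe */
				return -errno;		/* real error: data lost */
			}
			p   += n;			/* partial write: keep going */
			len -= n;
		}
		return 0;
	}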

Maybe we should require an AF_UNIX SOCK_SEQPACKET connection. That gives us atomicity, and drops the need for a mutex.
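
A sketch of that, under the same hypothetical-helper caveat: with SOCK_SEQPACKET every send() is delivered as one whole message or not at all, so values cannot be torn and concurrent writers need no lock:

	#include <sys/socket.h>

	/* sv[1] would be handed to KVM, sv[0] kept by userspace */
	int make_value_channel(int sv[2])
	{
		return socketpair(AF_UNIX, SOCK_SEQPACKET, 0, sv);
	}

	/* each recv() then returns exactly one written value:
	 *
	 *	uint64_t val = 0;
	 *	ssize_t n = recv(sv[0], &val, sizeof(val), 0);
	 *	// n is one message; messages never coalesce or split
	 */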

>
>  We should allow unix domain sockets as well.  In fact, for
>  read/write support, we need this to be a unix domain socket.

Sockets are actually better at this than pipes
as you can at least make the writes
non-blocking by passing in a message flag.

I'm not sure we want that.  How do we handle it?

If the socket buffers get filled up, it's time for the vcpu to wait for the mmio server process. Let the scheduler sort things out.
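
The message flag in question is presumably MSG_DONTWAIT. A hypothetical sketch of the non-blocking send, and the EAGAIN case that then needs a policy:

	#include <errno.h>
	#include <stdint.h>
	#include <sys/socket.h>

	/* MSG_DONTWAIT makes this one send() fail with EAGAIN instead
	 * of blocking when the socket buffer is full */
	static int try_push_value(int sock_fd, uint64_t val, size_t len)
	{
		ssize_t n = send(sock_fd, &val, len, MSG_DONTWAIT);

		if (n >= 0)
			return 0;		/* delivered whole */
		if (errno == EAGAIN || errno == EWOULDBLOCK)
			return 1;		/* buffer full: drop or queue? */
		return -errno;			/* real error */
	}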

Btw, like vhost-net and other thread offloads, this sort of trick is dangerous. When you have excess cpu resources, throughput improves; but once the system is loaded, the workload is needlessly spread across more cores than strictly necessary, and communication is done by context switches instead of user/system transitions.

If we support sockets, do we really need to support
pipes at all?

I think not.

--
error compiling committee.c: too many arguments to function
