Re: [PATCH v4 4/4] vhost: add an RPMsg API

On Tue, Aug 04, 2020 at 05:19:17PM +0200, Guennadi Liakhovetski wrote:
> On Tue, Aug 04, 2020 at 10:27:08AM -0400, Michael S. Tsirkin wrote:
> > On Wed, Jul 22, 2020 at 05:09:27PM +0200, Guennadi Liakhovetski wrote:
> > > Linux supports running the RPMsg protocol over the VirtIO transport
> > > protocol, but currently there is only support for VirtIO clients and
> > > no support for a VirtIO server. This patch adds a vhost-based RPMsg
> > > server implementation.
> > > 
> > > Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@xxxxxxxxxxxxxxx>
> > > ---
> > >  drivers/vhost/Kconfig       |   7 +
> > >  drivers/vhost/Makefile      |   3 +
> > >  drivers/vhost/rpmsg.c       | 375 ++++++++++++++++++++++++++++++++++++
> > >  drivers/vhost/vhost_rpmsg.h |  74 +++++++
> > >  4 files changed, 459 insertions(+)
> > >  create mode 100644 drivers/vhost/rpmsg.c
> > >  create mode 100644 drivers/vhost/vhost_rpmsg.h
> > > 
> > > diff --git a/drivers/vhost/Kconfig b/drivers/vhost/Kconfig
> > > index d3688c6afb87..602421bf1d03 100644
> > > --- a/drivers/vhost/Kconfig
> > > +++ b/drivers/vhost/Kconfig
> > > @@ -38,6 +38,13 @@ config VHOST_NET
> > >  	  To compile this driver as a module, choose M here: the module will
> > >  	  be called vhost_net.
> > >  
> > > +config VHOST_RPMSG
> > > +	tristate
> > 
> > So this lacks a description line, which means it does not appear
> > in menuconfig. How is the user supposed to set it?
> > I added a one-line description.
> 
> That was on purpose. I don't think there's any value in this API stand-alone,
> so I let users select it as needed. But we can change that too, if desired.

I guess the patches actually selecting this 
are separate then?

> > > +	depends on VHOST
> > 
> > Other drivers select VHOST instead. Any reason not to
> > do the same here?
> 
> I have
> 
> +	select VHOST
> +	select VHOST_RPMSG
> 
> in my client driver patch.

Any issues with selecting VHOST from here, so that others get it for free?
And if this symbol is selected by a client, its dependencies are ignored anyway ...
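
I.e. roughly (just to illustrate the idea, untested):

	config VHOST_RPMSG
		tristate
		select VHOST

so a client driver only needs "select VHOST_RPMSG" and gets VHOST pulled in
for free.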

> > > +	help
> > > +	  Vhost RPMsg API allows vhost drivers to communicate with VirtIO
> > > +	  drivers, using the RPMsg over VirtIO protocol.
> > > +
> > 
> > >  config VHOST_SCSI
> > >  	tristate "VHOST_SCSI TCM fabric driver"
> > >  	depends on TARGET_CORE && EVENTFD
> > > diff --git a/drivers/vhost/Makefile b/drivers/vhost/Makefile
> > > index f3e1897cce85..9cf459d59f97 100644
> > > --- a/drivers/vhost/Makefile
> > > +++ b/drivers/vhost/Makefile
> > > @@ -2,6 +2,9 @@
> > >  obj-$(CONFIG_VHOST_NET) += vhost_net.o
> > >  vhost_net-y := net.o
> > >  
> > > +obj-$(CONFIG_VHOST_RPMSG) += vhost_rpmsg.o
> > > +vhost_rpmsg-y := rpmsg.o
> > > +
> > >  obj-$(CONFIG_VHOST_SCSI) += vhost_scsi.o
> > >  vhost_scsi-y := scsi.o
> > >  
> > > diff --git a/drivers/vhost/rpmsg.c b/drivers/vhost/rpmsg.c
> > > new file mode 100644
> > > index 000000000000..d7ab48414224
> > > --- /dev/null
> > > +++ b/drivers/vhost/rpmsg.c
> > > @@ -0,0 +1,375 @@
> > > +// SPDX-License-Identifier: GPL-2.0-only
> > > +/*
> > > + * Copyright(c) 2020 Intel Corporation. All rights reserved.
> > > + *
> > > + * Author: Guennadi Liakhovetski <guennadi.liakhovetski@xxxxxxxxxxxxxxx>
> > > + *
> > > + * Vhost RPMsg VirtIO interface. It provides a set of functions to match the
> > > + * guest side RPMsg VirtIO API, provided by drivers/rpmsg/virtio_rpmsg_bus.c
> > > + * These functions handle creation of 2 virtual queues, handling of endpoint
> > > + * addresses, sending a name-space announcement to the guest as well as any
> > > + * user messages. This API can be used by any vhost driver to handle RPMsg
> > > + * specific processing.
> > > + * Specific vhost drivers, using this API will use their own VirtIO device
> > > + * IDs, that should then also be added to the ID table in virtio_rpmsg_bus.c
> > > + */
> > > +
> > > +#include <linux/compat.h>
> > > +#include <linux/file.h>
> > > +#include <linux/miscdevice.h>
> > > +#include <linux/module.h>
> > > +#include <linux/mutex.h>
> > > +#include <linux/vhost.h>
> > > +#include <linux/virtio_rpmsg.h>
> > > +#include <uapi/linux/rpmsg.h>
> > > +
> > > +#include "vhost.h"
> > > +#include "vhost_rpmsg.h"
> > > +
> > > +/*
> > > + * All virtio-rpmsg virtual queue kicks always come with just one buffer -
> > > + * either input or output
> > > + */
> > > +static int vhost_rpmsg_get_single(struct vhost_virtqueue *vq)
> > > +{
> > > +	struct vhost_rpmsg *vr = container_of(vq->dev, struct vhost_rpmsg, dev);
> > > +	unsigned int out, in;
> > > +	int head = vhost_get_vq_desc(vq, vq->iov, ARRAY_SIZE(vq->iov), &out, &in,
> > > +				     NULL, NULL);
> > > +	if (head < 0) {
> > > +		vq_err(vq, "%s(): error %d getting buffer\n",
> > > +		       __func__, head);
> > > +		return head;
> > > +	}
> > > +
> > > +	/* Nothing new? */
> > > +	if (head == vq->num)
> > > +		return head;
> > > +
> > > +	if (vq == &vr->vq[VIRTIO_RPMSG_RESPONSE] && (out || in != 1)) {
> > 
> > This in != 1 looks like a dependency on a specific message layout.
> > The virtio spec says to avoid these; using iov iters it's not too hard
> > to avoid ...
> 
> This is an RPMsg VirtIO implementation, and it has to match the virtio_rpmsg_bus.c
> driver, which has specific VirtIO queue and message usage patterns.

That could be fine for legacy virtio, but now you are claiming support
for virtio 1, so these assumptions need to be fixed in the device.
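
E.g. the request-queue read path could walk all the "out" segments with an
iov_iter instead of assuming everything sits in vq->iov[0] - something like
this (completely untested, reusing the vq/out names from the code quoted
above, and rpmsg_hdr is just a stand-in for whatever header struct the
driver copies out first):

	struct iov_iter iter;
	size_t len = iov_length(vq->iov, out);
	struct rpmsg_hdr hdr;

	/* iterate over the whole descriptor chain, not a single buffer */
	iov_iter_init(&iter, WRITE, vq->iov, out, len);
	if (copy_from_iter(&hdr, sizeof(hdr), &iter) != sizeof(hdr))
		return -EFAULT;

The response path can do the same with copy_to_iter() over the "in" segments.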


> > > +		vq_err(vq,
> > > +		       "%s(): invalid %d input and %d output in response queue\n",
> > > +		       __func__, in, out);
> > > +		goto return_buf;
> > > +	}
> > > +
> > > +	if (vq == &vr->vq[VIRTIO_RPMSG_REQUEST] && (in || out != 1)) {
> > > +		vq_err(vq,
> > > +		       "%s(): invalid %d input and %d output in request queue\n",
> > > +		       __func__, in, out);
> > > +		goto return_buf;
> > > +	}
> > > +
> > > +	return head;
> > > +
> > > +return_buf:
> > > +	/*
> > > +	 * FIXME: might need to return the buffer using vhost_add_used()
> > > +	 * or vhost_discard_vq_desc(). vhost_discard_vq_desc() is
> > > +	 * described as "being useful for error handling," but it makes
> > > +	 * the thus discarded buffers "unseen," so next time we look we
> > > +	 * retrieve them again?
> > 
> > 
> > Yes. It's your decision what to do on error. If you also signal
> > an eventfd using vq_err, then discarding will
> > make it so userspace can poke at the ring and hopefully fix it ...
> 
> I assume the user-space in this case is QEMU. Would it be safest to use
> vhost_add_used() then?

Your call.
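
FWIW the two variants would look roughly like this (untested, reusing the
vr/vq/head names from the code quoted above):

	/* complete the descriptor with zero length and notify the guest */
	vhost_add_used(vq, head, 0);
	vhost_signal(&vr->dev, vq);

or

	/* put the descriptor back so userspace can inspect the ring */
	vhost_discard_vq_desc(vq, 1);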

> > > +	 */
> > > +	return -EINVAL;
> > > +}
> 
> [snip]
> 
> > > +	return 0;
> > > +
> > > +return_buf:
> > > +	/*
> > > +	 * FIXME: vhost_discard_vq_desc() or vhost_add_used(), see comment in
> > > +	 * vhost_rpmsg_get_single()
> > > +	 */
> > 
> > What's to be done with this FIXME?
> 
> This is the same question as above - I just wasn't sure which error handling
> was appropriate here; I don't think many vhost drivers do any of this...
> 
> Thanks
> Guennadi



