Re: [Qemu-devel] [virtio-comment] [PATCH] *** Vhost-pci RFC v2 ***

On Fri, Sep 02, 2016 at 12:27:25AM +0800, Wei Wang wrote:
> On 09/01/2016 12:07 AM, Stefan Hajnoczi wrote:
> > On Tue, Aug 30, 2016 at 10:08:01AM +0000, Wang, Wei W wrote:
> > > On Monday, August 29, 2016 11:25 PM, Stefan Hajnoczi wrote:
> > > > To: Wang, Wei W <wei.w.wang@xxxxxxxxx>
> > > > Cc: kvm@xxxxxxxxxxxxxxx; qemu-devel@xxxxxxxxxx; virtio-
> > > > comment@xxxxxxxxxxxxxxxxxxxx; mst@xxxxxxxxxx; pbonzini@xxxxxxxxxx
> > > > Subject: Re: [virtio-comment] [PATCH] *** Vhost-pci RFC v2 ***
> > > > 
> > > > On Mon, Jun 27, 2016 at 02:01:24AM +0000, Wang, Wei W wrote:
> > > > > On Sun 6/19/2016 10:14 PM, Wei Wang wrote:
> > > > > > This RFC proposes a design of vhost-pci, which is a new virtio device type.
> > > > > > The vhost-pci device is used for inter-VM communication.
> > > > > > 
> > > > > > Changes in v2:
> > > > > > 1. changed the vhost-pci driver to use a controlq to send acknowledgement
> > > > > >     messages to the vhost-pci server rather than writing to the device
> > > > > >     configuration space;
> > > > > >
> > > > > > 2. re-organized all the data structures and the description layout;
> > > > > >
> > > > > > 3. removed the VHOST_PCI_CONTROLQ_UPDATE_DONE socket message, which
> > > > > >     is redundant;
> > > > > >
> > > > > > 4. added a message sequence number to the msg info structure to identify
> > > > > >     socket messages, so that the socket message exchange does not need
> > > > > >     to be blocking;
> > > > > >
> > > > > > 5. changed to use a uuid to identify each VM rather than the QEMU
> > > > > >     process id
> > > > > > 
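
To make items 1, 4 and 5 above concrete, here is a rough C sketch of what
such a controlq acknowledgement and its "msg info" header might look like.
All structure and field names below are invented for illustration; they are
not taken from the RFC:

/* Hypothetical layout only -- not the vhost-pci wire format from the RFC. */
#include <stdint.h>

#define VPCI_UUID_LEN 16

/* Common socket/controlq message header ("msg info"). */
struct vpci_msg_info {
    uint16_t msg_type;               /* e.g. a hypothetical VPCI_MSG_ACK        */
    uint16_t msg_len;                /* length of the payload that follows      */
    uint32_t msg_seq;                /* item 4: sequence number so replies can
                                        be matched to requests and the exchange
                                        does not have to block                  */
    uint8_t  vm_uuid[VPCI_UUID_LEN]; /* item 5: uuid of the sending VM, instead
                                        of the QEMU process id                  */
};

/* Item 1: the driver acknowledges over the controlq instead of writing
 * the device configuration space.  A minimal ack payload only needs to
 * echo the sequence number being acknowledged plus a status code. */
struct vpci_controlq_ack {
    struct vpci_msg_info info;
    uint32_t acked_seq;              /* msg_seq of the message being acked */
    uint32_t status;                 /* 0 = success, non-zero = error code */
};
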
> > > > > One more point to add: the server needs to send periodic socket
> > > > > messages to check whether the driver VM is still alive. I will add
> > > > > this message support in the next version.  (*v2-AR1*)
> > > > Either the driver VM could go down or the device VM (server) could go
> > > > down.  In both cases there must be a way to handle the situation.
> > > > 
> > > > If the server VM goes down it should be possible for the driver VM to
> > > > resume either via hotplug of a new device or through messages
> > > > reinitializing the dead device when the server VM restarts.
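
On the liveness point, a server-side probe could be as simple as the sketch
below: send a keepalive over the vhost-pci socket and treat a timeout or a
socket error as the peer VM being gone; recovery (hotplug of a new device,
or messages reinitializing the dead device once the peer restarts) is then
handled as described above.  The message type, header layout and function
name are all hypothetical, reusing the msg info sketch from earlier:

/* Hypothetical sketch only: how the server might probe whether the
 * driver VM is still alive over the vhost-pci socket. */
#include <poll.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>

#define VPCI_MSG_KEEPALIVE 0x10      /* hypothetical message type */

struct vpci_msg_info {
    uint16_t msg_type;
    uint16_t msg_len;
    uint32_t msg_seq;
    uint8_t  vm_uuid[16];
};

/* Returns 1 if the peer answered, 0 on timeout (peer presumed dead),
 * -1 on socket error (peer definitely gone).  In the last two cases
 * the caller would start the recovery path described above. */
static int vpci_peer_alive(int sock_fd, uint32_t seq, int timeout_ms)
{
    struct vpci_msg_info ping;
    struct pollfd pfd = { .fd = sock_fd, .events = POLLIN };
    char buf[64];

    memset(&ping, 0, sizeof(ping));
    ping.msg_type = VPCI_MSG_KEEPALIVE;
    ping.msg_seq  = seq;

    if (send(sock_fd, &ping, sizeof(ping), MSG_NOSIGNAL) < 0)
        return -1;

    switch (poll(&pfd, 1, timeout_ms)) {
    case 0:
        return 0;                    /* no reply within timeout_ms */
    case -1:
        return -1;
    default:
        /* Drain the reply; a real check would verify msg_seq. */
        return recv(sock_fd, buf, sizeof(buf), 0) > 0 ? 1 : -1;
    }
}
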
> > > I got feedback from people that the names "device VM" and "driver VM" are
> > > difficult to remember. Can we use client (or frontend) VM and server (or
> > > backend) VM in the discussion? I think that would sound more
> > > straightforward :)
> > We discussed this in a previous email thread.
> > 
> > Device and driver are the terms used by the virtio spec.  Anyone dealing
> > with vhost-pci design must be familiar with the virtio spec.
> > 
> > I don't see how using the terminology consistently can be confusing,
> > unless these people haven't looked at the virtio spec.  In that case
> > they have no business working on vhost-pci because virtio is a
> > prerequisite :).
> > 
> > Stefan
> I don't disagree :)
> But "frontend/backend" is also commonly used in descriptions in virtio
> related stuff, and it seems that more people like it. It's also easier to
> describe some components in the design (e.g. a backend functionality like
> vhost-pci-net). I am not sure if you guys are also OK with it.

If you want to use frontend/backend I don't mind.  It seems clear to me.

Stefan


