On Wed, Oct 11, 2017 at 02:03:20PM +0800, Wei Wang wrote:
> On 10/10/2017 11:15 PM, Michael S. Tsirkin wrote:
> > On Mon, Oct 02, 2017 at 04:38:01PM +0000, Wang, Wei W wrote:
> > > On Sunday, October 1, 2017 11:19 AM, Michael S. Tsirkin wrote:
> > > > On Sat, Sep 30, 2017 at 12:05:54PM +0800, Wei Wang wrote:
> > > > > +static void ctrlq_send_cmd(struct virtio_balloon *vb,
> > > > > +			   struct virtio_balloon_ctrlq_cmd *cmd,
> > > > > +			   bool inbuf)
> > > > > +{
> > > > > +	struct virtqueue *vq = vb->ctrl_vq;
> > > > > +
> > > > > +	ctrlq_add_cmd(vq, cmd, inbuf);
> > > > > +	if (!inbuf) {
> > > > > +		/*
> > > > > +		 * All the input cmd buffers are replenished here.
> > > > > +		 * This is necessary because the input cmd buffers are lost
> > > > > +		 * after live migration. The device needs to rewind all of
> > > > > +		 * them from the ctrl_vq.
> > > >
> > > > Confused. Live migration somehow loses state? Why is that, and why is it a
> > > > good idea? And how do you know this is migration, even?
> > > > Looks like all you know is that you got a free page end. That could happen
> > > > for any reason.
> > >
> > > I think this is something that current live migration lacks - what the
> > > device has read from the vq is not transferred during live migration; an
> > > example is stat_vq_elem:
> > > Line 476 at https://github.com/qemu/qemu/blob/master/hw/virtio/virtio-balloon.c
> >
> > This does not touch guest memory, though; it just manipulates
> > internal state to make it easier to migrate.
> > It's transparent to the guest, as migration should be.
> >
> > > Everything that is added to the vq and needs to be held by the device for
> > > later use has to account for the fact that live migration might happen at
> > > any time, and has to be re-taken from the vq by the device on the
> > > destination machine.
> > >
> > > So, even without this live migration optimization feature, I think everything
> > > that is added to the vq for the device to hold needs a way for the device to
> > > rewind it back from the vq - re-adding all the elements to the vq is a trick
> > > to keep a record of all of them on the vq so that the device-side rewinding
> > > can work.
> > >
> > > Please let me know if anything is missed or if you have other suggestions.
> >
> > IMO migration should pass enough data from the source to the destination for
> > the destination to continue where the source left off, without guest help.
>
> I'm afraid it would be difficult to pass the entire VirtQueueElement to the
> destination. I think that would also be the reason that stats_vq_elem chose
> to rewind from the guest vq, which re-does the
> virtqueue_pop() --> virtqueue_map_desc() steps (the QEMU virtual address to
> guest physical address mapping may have changed on the destination).

Yes, but note how that rewind does not involve modifying the ring.
It just rolls back some indices.
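[Editor's note: to make "rolls back some indices" concrete, here is a minimal
conceptual sketch in C. It is not QEMU's actual code (QEMU uses helpers such as
virtqueue_unpop()/virtqueue_rewind()); the structure and field names below are
simplified assumptions for illustration. The point is that rewinding only
adjusts the device's private bookkeeping so the element will be popped again,
and never writes into the guest-visible ring.]

	#include <stdint.h>

	/*
	 * Illustrative sketch only: "rewinding" an in-flight element means
	 * rolling the device's private indices back so the same descriptor
	 * chain will be popped again later.  Nothing is written into the
	 * guest-visible ring.  Field names here are hypothetical.
	 */
	struct vq_device_state {
		uint16_t last_avail_idx;	/* next avail ring entry to read */
		unsigned int inuse;		/* popped but not yet pushed back */
	};

	static void vq_rewind(struct vq_device_state *vq, unsigned int num)
	{
		vq->last_avail_idx -= num;	/* pretend the pops never happened */
		vq->inuse -= num;
		/* No descriptor is re-added; the ring itself is untouched. */
	}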
> How about another direction, which would be easier - using two 32-bit
> device-specific configuration registers, Host2Guest and Guest2Host command
> registers, to replace the ctrlq for command exchange?
>
> The flow can be as follows:
>
> 1) Before the host sends a StartCMD, it flushes the free_page_vq in case any
> old free page hint is left there;
> 2) Host writes StartCMD to the Host2Guest register, and notifies the guest;
> 3) Upon receiving a configuration notification, the guest reads the Host2Guest
> register, and detaches all the used buffers from free_page_vq;
> (then for each StartCMD, the free_page_vq will always have no obsolete free
> page hints, right?)
> 4) Guest starts reporting free pages:
>    4.1) Host may actively write StopCMD to the Host2Guest register before
>    the guest finishes; or
>    4.2) Guest finishes reporting and writes StopCMD to the Guest2Host register,
>    which traps to QEMU, to stop.
>
> Best,
> Wei

I am not sure it matters whether a VQ or the config space is used to start/stop.

But I think flushing is very fragile. You will easily run into races if one of
the actors gets out of sync and keeps adding data.

I think adding an ID in the free vq stream is a more robust approach.
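[Editor's note: for illustration, a hedged sketch of what "an ID in the free vq
stream" could look like. All names and the layout below are hypothetical
assumptions, not a defined interface: the host hands out a command ID when it
requests a report, the guest tags every hint it submits with that ID, and the
device simply ignores buffers carrying a stale ID, so no flushing or draining
is needed even if one side gets out of sync.]

	#include <stdbool.h>
	#include <stdint.h>

	/* Hypothetical hint layout: each buffer in the free page vq carries
	 * the ID of the request it answers, plus the free range reported. */
	struct free_page_hint {
		uint32_t cmd_id;	/* ID handed out by the host for this round */
		uint64_t pfn;		/* first page frame of the free range */
		uint32_t nr_pages;	/* number of pages in the range */
	};

	/*
	 * Device side (sketch): hints tagged with an old ID are silently
	 * dropped, so stale buffers left over from a previous round - or
	 * from before a migration - cannot pollute the current report.
	 */
	static bool device_consume_hint(const struct free_page_hint *hint,
					uint32_t current_cmd_id)
	{
		if (hint->cmd_id != current_cmd_id)
			return false;	/* stale hint: ignore it */
		/* ...record [pfn, pfn + nr_pages) as free for this round... */
		return true;
	}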