> -----Original Messages-----
> From: "Wodkowski, PawelX" <pawelx.wodkowski@xxxxxxxxx>
> Sent Time: 2019-01-10 00:19:48 (Thursday)
> To: "stefanha@xxxxxxxxx" <stefanha@xxxxxxxxx>, "spdk@xxxxxxxxxxxx" <spdk@xxxxxxxxxxxx>
> Cc: "libvir-list@xxxxxxxxxx" <libvir-list@xxxxxxxxxx>, "xieyongji@xxxxxxxxx" <xieyongji@xxxxxxxxx>, "qemu-devel@xxxxxxxxxx" <qemu-devel@xxxxxxxxxx>, "lilin24@xxxxxxxxx" <lilin24@xxxxxxxxx>
> Subject: Re: [SPDK] [Qemu-devel] Qemu migration with vhost-user-blk on top of local storage
>
> On Wed, 2019-01-09 at 21:23 +0800, wuzhouhui wrote:
> > > -----Original Messages-----
> > > From: "Stefan Hajnoczi" <stefanha@xxxxxxxxx>
> > > Sent Time: 2019-01-09 20:42:58 (Wednesday)
> > > To: wuzhouhui <wuzhouhui14@xxxxxxxxxxxxxxxx>
> > > Cc: qemu-devel@xxxxxxxxxx, xieyongji@xxxxxxxxx, lilin24@xxxxxxxxx,
> > > libvir-list@xxxxxxxxxx, spdk@xxxxxxxxxxxx
> > > Subject: Re: [Qemu-devel] Qemu migration with vhost-user-blk on top
> > > of local storage
> > >
> > > On Wed, Jan 09, 2019 at 06:23:42PM +0800, wuzhouhui wrote:
> > > > Hi everyone,
> > > >
> > > > I'm working with qemu and a vhost target (e.g. spdk), and I am
> > > > attempting to migrate a VM with 2 local storages. One local storage
> > > > is a regular file, e.g. /tmp/c74.qcow2, and the other is a malloc
> > > > bdev that spdk created. This malloc bdev will be exported to the VM
> > > > via vhost-user-blk. When I execute the following command:
> > > >
> > > >   virsh migrate --live --persistent --unsafe --undefinesource --copy-storage-all \
> > > >       --p2p --auto-converge --verbose --desturi qemu+tcp://<uri>/system vm0
> > > >
> > > > libvirt reports:
> > > >
> > > >   qemu-2.12.1: error: internal error: unable to execute QEMU command \
> > > >       'nbd-server-add': Cannot find device=drive-virtio-disk1 nor \
> > > >       node_name=drive-virtio-disk1
> > >
> > > Please post your libvirt domain XML.
> >
> > My libvirt is based on libvirt-1.1.1-29.el7, with many patches added to
> > satisfy our own needs, e.g. support for vhost-user-blk, so posting the
> > domain XML may not be that useful.
> > Anyway, the full content of the XML follows:
> >
> > <domain type='kvm'>
> >   <name>wzh</name>
> >   <uuid>a84e96e6-2c53-408d-986b-c709bc6a0e51</uuid>
> >   <memory unit='MiB'>4096</memory>
> >   <memoryBacking>
> >     <hugepages/>
> >   </memoryBacking>
> >   <currentMemory unit='MiB'>4096</currentMemory>
> >   <vcpu placement='static' cpuset='16-31'>2</vcpu>
> >   <os>
> >     <type arch='x86_64' machine='pc'>hvm</type>
> >     <boot dev='hd'/>
> >   </os>
> >   <features>
> >     <acpi/>
> >   </features>
> >   <clock offset='utc'>
> >     <timer name='rtc' tickpolicy='catchup'/>
> >   </clock>
> >   <on_poweroff>destroy</on_poweroff>
> >   <on_reboot>restart</on_reboot>
> >   <on_crash>destroy</on_crash>
> >   <devices>
> >     <emulator>/data/wzh/x86_64-softmmu/qemu-system-x86_64</emulator>
> >     <disk type='file' device='disk'>
> >       <driver name='qemu' type='qcow2' cache='none'/>
> >       <source file='/data/wzh/c74.qcow2'/>
> >       <target dev='vda' bus='virtio'/>
> >       <alias name='virtio-disk0'/>
> >     </disk>
> >
> >     <disk type='vhost-user-blk' device='disk'>
> >       <source type='unix' path='/var/tmp/lv0' mode='client'>
> >       </source>
> >       <target dev='vdb' bus='virtio'/>
> >       <driver queues='4'/>
> >     </disk>
> >
> >     <controller type='usb' index='0'>
> >       <alias name='usb0'/>
> >     </controller>
> >     <controller type='pci' index='0' model='pci-root'>
> >       <alias name='pci.0'/>
> >     </controller>
> >     <serial type='pty'>
> >       <target port='0'/>
> >       <alias name='serial0'/>
> >     </serial>
> >     <serial type='pty'>
> >       <target port='1'/>
> >       <alias name='serial1'/>
> >     </serial>
> >     <input type='tablet' bus='usb'>
> >       <alias name='input0'/>
> >     </input>
> >     <input type='mouse' bus='ps2'/>
> >     <graphics type='vnc' autoport='yes' listen='0.0.0.0' keymap='en-us'>
> >       <listen type='address' address='0.0.0.0'/>
> >     </graphics>
> >     <video>
> >       <model type='cirrus' vram='9216' heads='1'/>
> >       <alias name='video0'/>
> >     </video>
> >   </devices>
> >   <seclabel type='none'/>
> > </domain>
> >
> > > > Does it mean that qemu with spdk on top of local storage doesn't
> > > > support migration?
> > > >
> > > > QEMU: 2.12.1
> > > > SPDK: 18.10
> > >
> > > vhost-user-blk bypasses the QEMU block layer, so NBD storage migration
> > > at the QEMU level will not work for the vhost-user-blk disk.
> > >
> > > Stefan
>
> I don't know if this is the case that wuzhouhui is using, but generally
> migration should work if all storages are accessible from both machines.
> The QEMU image and Malloc bdev should be exposed using some kind of
> network sharing. SPDK on the migration target should be configured the
> same way as on the source machine.

My case is different from yours. All storages (including the qemu image and
the malloc bdev) are local, and only accessible from one of the hosts.
Anyway, thanks for everyone's response.

wuzhouhui

> We are testing this by issuing QEMU monitor commands, so I can't help
> with libvirt here.
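
For reference, and only as my guess at a minimal monitor-level sequence
(not Pawel's actual test scripts), I imagine the flow for a vhost-user-blk
disk looks roughly like the sketch below; the hostname, TCP port and the
memory-backend size are placeholders, and the vhost-user socket path is
simply the one from my XML above:

  # On the target machine: start QEMU with the same vhost-user-blk/SPDK
  # configuration as on the source (shared hugepage memory is required
  # for vhost-user), waiting for the incoming migration stream.
  qemu-system-x86_64 ... \
      -object memory-backend-file,id=mem0,size=4G,mem-path=/dev/hugepages,share=on \
      -numa node,memdev=mem0 \
      -chardev socket,id=spdk_blk0,path=/var/tmp/lv0 \
      -device vhost-user-blk-pci,chardev=spdk_blk0,num-queues=4 \
      -incoming tcp:0:4444

  # On the source machine, in the QEMU monitor of the running VM:
  (qemu) migrate -d tcp:target-host:4444
  (qemu) info migrate     # poll until the status reports "completed"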
>
> An example setup/test case we are using in one of our tests:
>
> Machine1:
>   Expose the VM image over SSHFS
>   Expose some block device using the SPDK nvmf target
>   SPDK nvmf initiator:
>     - connects to the Machine1 SPDK nvmf target to access the block device
>   SPDK vhost-scsi:
>     - presents the block device to QEMU from the SPDK nvmf initiator
>   QEMU VM uses:
>     - the shared VM image
>     - the block device from the SPDK nvmf initiator over vhost-user-scsi
>
> Machine2:
>   SPDK nvmf initiator:
>     - connects to the Machine1 SPDK nvmf target to access the block device
>   SPDK vhost-scsi:
>     - presents the block device to QEMU from the SPDK nvmf initiator
>   QEMU instance waiting for incoming migration, using:
>     - the shared VM image over SSHFS
>     - the block device from the SPDK nvmf initiator over vhost-user-scsi
>
> Some traffic is generated using FIO and then we just migrate the VM from
> Machine1 to Machine2.
>
> Pawel
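
For my own notes, a rough, unverified sketch of what I think the SPDK side
of such a setup involves with the 18.10-era rpc.py; I have not run these
exact commands, and the subsystem NQN, transport, IP addresses, port and
bdev names below are made up for illustration:

  # Machine1: create a Malloc bdev and export it over NVMe-oF (RDMA assumed)
  ./scripts/rpc.py construct_malloc_bdev -b Malloc0 512 4096
  ./scripts/rpc.py nvmf_create_transport -t RDMA
  ./scripts/rpc.py nvmf_subsystem_create nqn.2019-01.io.spdk:cnode0 -a -s SPDK0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-01.io.spdk:cnode0 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-01.io.spdk:cnode0 \
      -t rdma -a 192.168.0.1 -s 4420

  # Machine1 and Machine2: attach to that subsystem as an NVMe-oF initiator
  # and expose the resulting bdev to QEMU through a vhost-scsi controller
  ./scripts/rpc.py construct_nvme_bdev -b Nvme0 -t rdma -f ipv4 \
      -a 192.168.0.1 -s 4420 -n nqn.2019-01.io.spdk:cnode0
  ./scripts/rpc.py construct_vhost_scsi_controller vhost.0
  ./scripts/rpc.py add_vhost_scsi_lun vhost.0 0 Nvme0n1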