On 09/25/2012 09:59 AM, Anthony Liguori wrote:
Lucas Meneghel Rodrigues <lmr@xxxxxxxxxx> writes:
Hi guys,
We're seeing the following problem during upstream testing:
qemu: VQ 0 size 0x80 Guest index 0x2d6
inconsistent with Host index 0x18: delta 0x2be
qemu: warning: error while loading state for
instance 0x0 of device '0000:00:04.0/virtio-blk'
load of migration failed
This is happening consistently with both qemu and qemu-kvm. The test
case is simple: while the vm goes through a reboot loop, a parallel
ping-pong migration loop runs.
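For reference, the test logic is roughly the following; a minimal
sketch, not the actual autotest code (the monitor socket paths, the
TCP port and the destination respawn step are invented for
illustration):

import socket
import time

def hmp(path, cmd):
    # Send one HMP command over a qemu unix monitor socket, return output.
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(path)
    time.sleep(0.2)
    s.recv(4096)                      # drain the "(qemu)" prompt
    s.sendall(cmd.encode() + b"\n")
    time.sleep(0.5)
    out = s.recv(65536).decode()
    s.close()
    return out

# The guest reboot loop runs concurrently (serial console/ssh, not shown),
# so each migration can land at any point of the reboot cycle.
src, dst = "/tmp/monitor-src", "/tmp/monitor-dst"   # invented paths
while True:
    # the destination qemu is already running with -incoming
    hmp(src, "migrate -d tcp:localhost:5200")
    while "Migration status: active" in hmp(src, "info migrate"):
        time.sleep(2)                 # the test polls every ~2s (see log)
    # ping-pong: the old destination becomes the new source; a fresh
    # destination is then spawned (elided here)
    src, dst = dst, src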
I'm happy to provide more details and logs.
Can you provide the full command line?
Sure. The problem happens with *all* migration protocols; let's use
the fd protocol as an example (a sketch of how the fd handoff works
follows the two command lines below). The vm is started with:
09/22 07:26:19 INFO | kvm_vm:1605| /usr/local/autotest/tests/kvm/qemu
09/22 07:26:19 INFO | kvm_vm:1605| -S
09/22 07:26:19 INFO | kvm_vm:1605| -name 'vm1'
09/22 07:26:19 INFO | kvm_vm:1605| -nodefaults
09/22 07:26:19 INFO | kvm_vm:1605| -chardev
socket,id=hmp_id_humanmonitor1,path=/tmp/monitor-humanmonitor1-20120922-072139-HDnHgnLh,server,nowait
09/22 07:26:19 INFO | kvm_vm:1605| -mon
chardev=hmp_id_humanmonitor1,mode=readline
09/22 07:26:19 INFO | kvm_vm:1605| -chardev
socket,id=qmp_id_qmpmonitor1,path=/tmp/monitor-qmpmonitor1-20120922-072139-HDnHgnLh,server,nowait
09/22 07:26:19 INFO | kvm_vm:1605| -mon
chardev=qmp_id_qmpmonitor1,mode=control
09/22 07:26:19 INFO | kvm_vm:1605| -chardev
socket,id=serial_id_20120922-072139-HDnHgnLh,path=/tmp/serial-20120922-072139-HDnHgnLh,server,nowait
09/22 07:26:19 INFO | kvm_vm:1605| -device
isa-serial,chardev=serial_id_20120922-072139-HDnHgnLh
09/22 07:26:19 INFO | kvm_vm:1605| -chardev
socket,id=seabioslog_id_20120922-072139-HDnHgnLh,path=/tmp/seabios-20120922-072139-HDnHgnLh,server,nowait
09/22 07:26:19 INFO | kvm_vm:1605| -device
isa-debugcon,chardev=seabioslog_id_20120922-072139-HDnHgnLh,iobase=0x402
09/22 07:26:19 INFO | kvm_vm:1605| -device ich9-usb-uhci1,id=usb1
09/22 07:26:19 INFO | kvm_vm:1605| -drive
file='/tmp/kvm_autotest_root/images/rhel62-64.qcow2',if=none,cache=none,id=virtio0
09/22 07:26:19 INFO | kvm_vm:1605| -device
virtio-blk-pci,drive=virtio0
09/22 07:26:19 INFO | kvm_vm:1605| -device
virtio-net-pci,netdev=idPQlGQt,mac='9a:4b:4c:4d:4e:4f',id='id01mORp'
09/22 07:26:19 INFO | kvm_vm:1605| -netdev tap,id=idPQlGQt,fd=24
09/22 07:26:19 INFO | kvm_vm:1605| -m 2048
09/22 07:26:19 INFO | kvm_vm:1605| -smp
2,cores=1,threads=1,sockets=2
09/22 07:26:19 INFO | kvm_vm:1605| -device
usb-tablet,id=usb-tablet1,bus=usb1.0,port=1
09/22 07:26:19 INFO | kvm_vm:1605| -vnc :0
09/22 07:26:19 INFO | kvm_vm:1605| -vga std
09/22 07:26:19 INFO | kvm_vm:1605| -rtc
base=utc,clock=host,driftfix=none
09/22 07:26:19 INFO | kvm_vm:1605| -boot order=cdn,once=c,menu=off
09/22 07:26:19 INFO | kvm_vm:1605| -enable-kvm
09/22 07:26:19 INFO | kvm_vm:1605| -enable-kvm
Then the state is migrated to a new destination process, started with:
09/22 07:26:48 INFO | kvm_vm:1605| /usr/local/autotest/tests/kvm/qemu
09/22 07:26:48 INFO | kvm_vm:1605| -S
09/22 07:26:48 INFO | kvm_vm:1605| -name 'vm1'
09/22 07:26:48 INFO | kvm_vm:1605| -nodefaults
09/22 07:26:48 INFO | kvm_vm:1605| -chardev
socket,id=hmp_id_humanmonitor1,path=/tmp/monitor-humanmonitor1-20120922-072648-g6gL8thp,server,nowait
09/22 07:26:48 INFO | kvm_vm:1605| -mon
chardev=hmp_id_humanmonitor1,mode=readline
09/22 07:26:48 INFO | kvm_vm:1605| -chardev
socket,id=qmp_id_qmpmonitor1,path=/tmp/monitor-qmpmonitor1-20120922-072648-g6gL8thp,server,nowait
09/22 07:26:48 INFO | kvm_vm:1605| -mon
chardev=qmp_id_qmpmonitor1,mode=control
09/22 07:26:48 INFO | kvm_vm:1605| -chardev
socket,id=serial_id_20120922-072648-g6gL8thp,path=/tmp/serial-20120922-072648-g6gL8thp,server,nowait
09/22 07:26:48 INFO | kvm_vm:1605| -device
isa-serial,chardev=serial_id_20120922-072648-g6gL8thp
09/22 07:26:48 INFO | kvm_vm:1605| -chardev
socket,id=seabioslog_id_20120922-072648-g6gL8thp,path=/tmp/seabios-20120922-072648-g6gL8thp,server,nowait
09/22 07:26:48 INFO | kvm_vm:1605| -device
isa-debugcon,chardev=seabioslog_id_20120922-072648-g6gL8thp,iobase=0x402
09/22 07:26:48 INFO | kvm_vm:1605| -device ich9-usb-uhci1,id=usb1
09/22 07:26:48 INFO | kvm_vm:1605| -drive
file='/tmp/kvm_autotest_root/images/rhel62-64.qcow2',if=none,cache=none,id=virtio0
09/22 07:26:48 INFO | kvm_vm:1605| -device
virtio-blk-pci,drive=virtio0
09/22 07:26:48 INFO | kvm_vm:1605| -device
virtio-net-pci,netdev=idSld9kt,mac='9a:4b:4c:4d:4e:4f',id='idFBA7Vj'
09/22 07:26:48 INFO | kvm_vm:1605| -netdev tap,id=idSld9kt,fd=45
09/22 07:26:48 INFO | kvm_vm:1605| -m 2048
09/22 07:26:48 INFO | kvm_vm:1605| -smp
2,cores=1,threads=1,sockets=2
09/22 07:26:48 INFO | kvm_vm:1605| -device
usb-tablet,id=usb-tablet1,bus=usb1.0,port=1
09/22 07:26:48 INFO | kvm_vm:1605| -vnc :1
09/22 07:26:48 INFO | kvm_vm:1605| -vga std
09/22 07:26:48 INFO | kvm_vm:1605| -rtc
base=utc,clock=host,driftfix=none
09/22 07:26:48 INFO | kvm_vm:1605| -boot order=cdn,once=c,menu=off
09/22 07:26:48 INFO | kvm_vm:1605| -enable-kvm
09/22 07:26:48 INFO | kvm_vm:1605| -enable-kvm
09/22 07:26:48 INFO | kvm_vm:1605| -incoming "fd:43"
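For completeness, this is how the fd protocol handoff works: the
harness creates a pipe, lets the destination qemu inherit the read end
(hence the -incoming "fd:43" above) and passes the write end to the
already-running source qemu over its monitor socket with SCM_RIGHTS,
where the getfd command binds it to a name. A minimal sketch; the fd
name, the extra qemu flags and the surrounding plumbing are invented,
but getfd and the SCM_RIGHTS passing are qemu's real mechanism:

import array
import os
import socket
import subprocess

qemu = "/usr/local/autotest/tests/kvm/qemu"

# The migration stream flows through an ordinary pipe.
r_fd, w_fd = os.pipe()

# The destination inherits the read end and listens on it.
dst = subprocess.Popen([qemu, "-S", "-nodefaults",
                        "-incoming", "fd:%d" % r_fd],
                       pass_fds=[r_fd])

# The write end goes to the running source over its monitor socket;
# 'getfd' picks up the fd sent via SCM_RIGHTS and binds it to a name.
mon = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
mon.connect("/tmp/monitor-humanmonitor1-20120922-072139-HDnHgnLh")
mon.sendmsg([b"getfd migfd_44\n"],
            [(socket.SOL_SOCKET, socket.SCM_RIGHTS,
              array.array("i", [w_fd]))])

# The source then migrates into that descriptor by name, matching the
# 'migrate -d fd:migfd_44_1348313208' in the log below.
mon.sendall(b"migrate -d fd:migfd_44\n")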
Then the state starts to be transferred:
09/22 07:26:50 INFO | kvm_vm:2348| Migrating to fd:migfd_44_1348313208
09/22 07:26:50 DEBUG|kvm_monito:0177| (monitor humanmonitor1) Sending
command 'migrate -d fd:migfd_44_1348313208'
09/22 07:26:52 DEBUG|virt_utils:1537| Waiting for migration to complete
(2.001432 secs)
09/22 07:26:52 DEBUG|kvm_monito:0177| (monitor humanmonitor1) Sending
command 'info migrate'
09/22 07:26:52 DEBUG|kvm_monito:0316| (monitor humanmonitor1) Response
to 'info migrate'
09/22 07:26:52 DEBUG|kvm_monito:0318| (monitor humanmonitor1)
capabilities: xbzrle: off
09/22 07:26:52 DEBUG|kvm_monito:0318| (monitor humanmonitor1)
Migration status: active
09/22 07:26:52 DEBUG|kvm_monito:0318| (monitor humanmonitor1) total
time: 2060 milliseconds
09/22 07:26:52 DEBUG|kvm_monito:0318| (monitor humanmonitor1)
transferred ram: 62363 kbytes
09/22 07:26:52 DEBUG|kvm_monito:0318| (monitor humanmonitor1)
remaining ram: 1873896 kbytes
09/22 07:26:52 DEBUG|kvm_monito:0318| (monitor humanmonitor1) total
ram: 2113920 kbytes
09/22 07:26:52 DEBUG|kvm_monito:0318| (monitor humanmonitor1)
duplicate: 44426 pages
09/22 07:26:52 DEBUG|kvm_monito:0318| (monitor humanmonitor1) normal:
15580 pages
09/22 07:26:52 DEBUG|kvm_monito:0318| (monitor humanmonitor1) normal
bytes: 62320 kbytes
09/22 07:26:54 DEBUG|virt_utils:1537| Waiting for migration to complete
(4.053059 secs)
09/22 07:26:54 DEBUG|kvm_monito:0177| (monitor humanmonitor1) Sending
command 'info migrate'
09/22 07:26:54 DEBUG|kvm_monito:0316| (monitor humanmonitor1) Response
to 'info migrate'
09/22 07:26:54 DEBUG|kvm_monito:0318| (monitor humanmonitor1)
capabilities: xbzrle: off
09/22 07:26:54 DEBUG|kvm_monito:0318| (monitor humanmonitor1)
Migration status: active
09/22 07:26:54 DEBUG|kvm_monito:0318| (monitor humanmonitor1) total
time: 4066 milliseconds
09/22 07:26:54 DEBUG|kvm_monito:0318| (monitor humanmonitor1)
transferred ram: 75414 kbytes
09/22 07:26:54 DEBUG|kvm_monito:0318| (monitor humanmonitor1)
remaining ram: 817052 kbytes
09/22 07:26:54 DEBUG|kvm_monito:0318| (monitor humanmonitor1) total
ram: 2113920 kbytes
09/22 07:26:54 DEBUG|kvm_monito:0318| (monitor humanmonitor1)
duplicate: 305439 pages
09/22 07:26:54 DEBUG|kvm_monito:0318| (monitor humanmonitor1) normal:
18779 pages
09/22 07:26:54 DEBUG|kvm_monito:0318| (monitor humanmonitor1) normal
bytes: 75116 kbytes
09/22 07:26:56 DEBUG|virt_utils:1537| Waiting for migration to complete
(6.058427 secs)
09/22 07:26:56 DEBUG|kvm_monito:0177| (monitor humanmonitor1) Sending
command 'info migrate'
09/22 07:26:56 DEBUG|kvm_monito:0316| (monitor humanmonitor1) Response
to 'info migrate'
09/22 07:26:56 DEBUG|kvm_monito:0318| (monitor humanmonitor1)
capabilities: xbzrle: off
09/22 07:26:56 DEBUG|kvm_monito:0318| (monitor humanmonitor1)
Migration status: active
09/22 07:26:56 DEBUG|kvm_monito:0318| (monitor humanmonitor1) total
time: 6071 milliseconds
09/22 07:26:56 DEBUG|kvm_monito:0318| (monitor humanmonitor1)
transferred ram: 104012 kbytes
09/22 07:26:56 DEBUG|kvm_monito:0318| (monitor humanmonitor1)
remaining ram: 272960 kbytes
09/22 07:26:56 DEBUG|kvm_monito:0318| (monitor humanmonitor1) total
ram: 2113920 kbytes
09/22 07:26:56 DEBUG|kvm_monito:0318| (monitor humanmonitor1)
duplicate: 434349 pages
09/22 07:26:56 DEBUG|kvm_monito:0318| (monitor humanmonitor1) normal:
25897 pages
09/22 07:26:56 DEBUG|kvm_monito:0318| (monitor humanmonitor1) normal
bytes: 103588 kbytes
09/22 07:26:58 DEBUG|virt_utils:1537| Waiting for migration to complete
(8.063601 secs)
09/22 07:26:58 DEBUG|kvm_monito:0177| (monitor humanmonitor1) Sending
command 'info migrate'
09/22 07:26:58 DEBUG|kvm_monito:0316| (monitor humanmonitor1) Response
to 'info migrate'
09/22 07:26:58 DEBUG|kvm_monito:0318| (monitor humanmonitor1)
capabilities: xbzrle: off
09/22 07:26:58 DEBUG|kvm_monito:0318| (monitor humanmonitor1)
Migration status: active
09/22 07:26:58 DEBUG|kvm_monito:0318| (monitor humanmonitor1) total
time: 8076 milliseconds
09/22 07:26:58 DEBUG|kvm_monito:0318| (monitor humanmonitor1)
transferred ram: 178196 kbytes
09/22 07:26:58 DEBUG|kvm_monito:0318| (monitor humanmonitor1)
remaining ram: 198380 kbytes
09/22 07:26:58 DEBUG|kvm_monito:0318| (monitor humanmonitor1) total
ram: 2113920 kbytes
09/22 07:26:58 DEBUG|kvm_monito:0318| (monitor humanmonitor1)
duplicate: 434450 pages
09/22 07:26:58 DEBUG|kvm_monito:0318| (monitor humanmonitor1) normal:
44443 pages
09/22 07:26:58 DEBUG|kvm_monito:0318| (monitor humanmonitor1) normal
bytes: 177772 kbytes
09/22 07:27:00 DEBUG|virt_utils:1537| Waiting for migration to complete
(10.068405 secs)
09/22 07:27:00 DEBUG|kvm_monito:0177| (monitor humanmonitor1) Sending
command 'info migrate'
09/22 07:27:00 DEBUG|kvm_monito:0316| (monitor humanmonitor1) Response
to 'info migrate'
09/22 07:27:00 DEBUG|kvm_monito:0318| (monitor humanmonitor1)
capabilities: xbzrle: off
09/22 07:27:00 DEBUG|kvm_monito:0318| (monitor humanmonitor1)
Migration status: active
09/22 07:27:00 DEBUG|kvm_monito:0318| (monitor humanmonitor1) total
time: 10083 milliseconds
09/22 07:27:00 DEBUG|kvm_monito:0318| (monitor humanmonitor1)
transferred ram: 263581 kbytes
09/22 07:27:00 DEBUG|kvm_monito:0318| (monitor humanmonitor1)
remaining ram: 109596 kbytes
09/22 07:27:00 DEBUG|kvm_monito:0318| (monitor humanmonitor1) total
ram: 2113920 kbytes
09/22 07:27:00 DEBUG|kvm_monito:0318| (monitor humanmonitor1)
duplicate: 435357 pages
09/22 07:27:00 DEBUG|kvm_monito:0318| (monitor humanmonitor1) normal:
65789 pages
09/22 07:27:00 DEBUG|kvm_monito:0318| (monitor humanmonitor1) normal
bytes: 263156 kbytes
09/22 07:27:02 DEBUG|virt_utils:1537| Waiting for migration to complete
(12.074452 secs)
09/22 07:27:02 DEBUG|kvm_monito:0177| (monitor humanmonitor1) Sending
command 'info migrate'
09/22 07:27:02 DEBUG|kvm_monito:0316| (monitor humanmonitor1) Response
to 'info migrate'
09/22 07:27:02 DEBUG|kvm_monito:0318| (monitor humanmonitor1)
capabilities: xbzrle: off
09/22 07:27:02 DEBUG|kvm_monito:0318| (monitor humanmonitor1)
Migration status: active
09/22 07:27:02 DEBUG|kvm_monito:0318| (monitor humanmonitor1) total
time: 12088 milliseconds
09/22 07:27:02 DEBUG|kvm_monito:0318| (monitor humanmonitor1)
transferred ram: 348521 kbytes
09/22 07:27:02 DEBUG|kvm_monito:0318| (monitor humanmonitor1)
remaining ram: 62564 kbytes
09/22 07:27:02 DEBUG|kvm_monito:0318| (monitor humanmonitor1) total
ram: 2113920 kbytes
09/22 07:27:02 DEBUG|kvm_monito:0318| (monitor humanmonitor1)
duplicate: 439474 pages
09/22 07:27:02 DEBUG|kvm_monito:0318| (monitor humanmonitor1) normal:
87023 pages
09/22 07:27:02 DEBUG|kvm_monito:0318| (monitor humanmonitor1) normal
bytes: 348092 kbytes
09/22 07:27:02 INFO | aexpect:0786| [qemu output] qemu: VQ 0 size 0x80
Guest index 0x2d3 inconsistent with Host index 0x8: delta 0x2cb
09/22 07:27:02 INFO | aexpect:0786| [qemu output] qemu: warning: error
while loading state for instance 0x0 of device '0000:00:04.0/virtio-blk'
09/22 07:27:02 INFO | aexpect:0786| [qemu output] load of migration failed
09/22 07:27:02 INFO | aexpect:0786| [qemu output] (Process terminated
with status 0)
So at some point the destination qemu prints this message and exits
cleanly (status 0). I did notice we're adding -enable-kvm twice and
will fix it, though I doubt it's the source of the problem.
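For what it's worth, the message comes from the sanity check that
virtio_load() applies when restoring a virtqueue: the number of
avail-ring entries the guest has published but the host has not yet
consumed must fit in the ring. Re-stating the arithmetic with the
values from the failure above (a paraphrase of the check in
hw/virtio.c, not the actual code):

# The ring indices are free-running 16-bit counters, so the
# subtraction wraps modulo 2^16.
vring_num      = 0x80    # VQ 0 size from the error message
guest_avail    = 0x2d3   # avail->idx read from restored guest RAM
last_avail_idx = 0x8     # host index from the migrated device state

delta = (guest_avail - last_avail_idx) & 0xffff
assert delta == 0x2cb
if delta > vring_num:
    print("VQ 0 size 0x%x Guest index 0x%x inconsistent with "
          "Host index 0x%x: delta 0x%x"
          % (vring_num, guest_avail, last_avail_idx, delta))

A delta larger than the ring size would mean more requests are
outstanding than the ring can hold, so the restored device state and
the restored guest RAM apparently disagree about how far the ring has
advanced, and the destination refuses to finish loading.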
Regards,
Lucas