RE: [Qemu-devel] VM performance degradation after KVM live migration or save-restore with EPT enabled

> Hi,
> 
> On 11.07.2013 at 11:36, Zhanghaoyu (A) wrote:
> > I hit a problem similar to the ones reported in the threads below,
> > while performing live migration and save-restore tests on the KVM
> > platform (qemu:1.4.0, host:suse11sp2, guest:suse11sp2), running a
> > telecommunication software suite in the guest:
> > https://lists.gnu.org/archive/html/qemu-devel/2013-05/msg00098.html
> > http://comments.gmane.org/gmane.comp.emulators.kvm.devel/102506
> > http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592
> > https://bugzilla.kernel.org/show_bug.cgi?id=58771
> >
> > After live migration or virsh restore [savefile], one process's CPU
> > utilization went up by about 30%, resulting in throughput degradation
> > of this process.
> > oprofile report on this process in the guest, pre live migration:
> 
> So far we've been unable to reproduce this with a pure qemu-kvm /
> qemu-system-x86_64 command line on several EPT machines, whereas for
> virsh it was reported as confirmed. Can you please share the resulting
> QEMU command line from libvirt logs or process list?
The QEMU command line from /var/log/libvirt/qemu/[domain].log is:
LC_ALL=C PATH=/sbin:/usr/sbin:/usr/local/sbin:/root/bin:/usr/local/bin:/usr/bin:/bin:/usr/X11R6/bin:/usr/games:/usr/lib/mit/bin:/usr/lib/mit/sbin \
HOME=/root USER=root LOGNAME=root QEMU_AUDIO_DRV=none \
/usr/local/bin/qemu-system-x86_64 -name CSC2 -S -M pc-0.12 -cpu qemu32 -enable-kvm -m 12288 \
-smp 4,sockets=4,cores=1,threads=1 -uuid 76e03575-a3ad-589a-e039-40160274bb97 \
-no-user-config -nodefaults \
-chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/CSC2.monitor,server,nowait \
-mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime -no-shutdown \
-device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
-drive file=/opt/ne/vm/CSC2.img,if=none,id=drive-virtio-disk0,format=raw,cache=none \
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 \
-netdev tap,fd=20,id=hostnet0,vhost=on,vhostfd=22 \
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:e0:fc:00:0f:01,bus=pci.0,addr=0x3,bootindex=2 \
-netdev tap,fd=23,id=hostnet1,vhost=on,vhostfd=24 \
-device virtio-net-pci,netdev=hostnet1,id=net1,mac=00:e0:fc:01:0f:01,bus=pci.0,addr=0x4 \
-netdev tap,fd=25,id=hostnet2,vhost=on,vhostfd=26 \
-device virtio-net-pci,netdev=hostnet2,id=net2,mac=00:e0:fc:02:0f:01,bus=pci.0,addr=0x5 \
-netdev tap,fd=27,id=hostnet3,vhost=on,vhostfd=28 \
-device virtio-net-pci,netdev=hostnet3,id=net3,mac=00:e0:fc:03:0f:01,bus=pci.0,addr=0x6 \
-netdev tap,fd=29,id=hostnet4,vhost=on,vhostfd=30 \
-device virtio-net-pci,netdev=hostnet4,id=net4,mac=00:e0:fc:0a:0f:01,bus=pci.0,addr=0x7 \
-netdev tap,fd=31,id=hostnet5,vhost=on,vhostfd=32 \
-device virtio-net-pci,netdev=hostnet5,id=net5,mac=00:e0:fc:0b:0f:01,bus=pci.0,addr=0x9 \
-chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 \
-vnc *:1 -k en-us -vga cirrus \
-device i6300esb,id=watchdog0,bus=pci.0,addr=0xb -watchdog-action poweroff \
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0xa
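
For reference, the save/restore cycle and the guest-side profiling were
driven roughly as follows (a sketch: the domain name CSC2 comes from the
log above; the save path, sampling window, and the profiled process name
are illustrative placeholders):

# on the host: save the running domain to a file, then restore it
virsh save CSC2 /var/lib/libvirt/save/CSC2.save
virsh restore /var/lib/libvirt/save/CSC2.save

# in the guest, before and after the save/restore cycle: profile the
# affected process with oprofile's legacy opcontrol interface
opcontrol --init
opcontrol --setup --no-vmlinux
opcontrol --start
sleep 60                              # sampling window, adjust as needed
opcontrol --stop
opcontrol --dump
opreport -l /path/to/process-binary > oprofile-report.txt
opcontrol --shutdown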
> 
> Are both host and guest kernels at 3.0.80 (latest SLES updates)?
No, both host and guest run the stock SLES11-SP2-64 GM release, kernel version 3.0.13-0.27.
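
Since the subject ties the regression to EPT, a quick way to confirm its
role (a sketch; all guests must be shut down before reloading the module)
is to check and toggle the kvm_intel ept parameter on the host:

# check whether EPT is currently enabled (Intel hosts)
cat /sys/module/kvm_intel/parameters/ept     # prints Y or N

# reload kvm_intel with EPT disabled, then rerun the migration test
modprobe -r kvm_intel
modprobe kvm_intel ept=0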

Thanks,
Zhang Haoyu
> 
> Thanks,
> Andreas
> 
> --
> SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
> GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer; HRB 16746 AG Nürnberg