RE: Guest performance is reduced after live migration


 



Here's what I have found so far...

Ubuntu 10.04 performed within +/- 2% so I'm not including those results.  It seems to be more of a disk access issue, so I'm going to run some more disk-specific benchmarks and I'll post those results later.  I'd be happy to run any other perf tests that might help track down the problem as well.
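For the disk runs I have something along these lines in mind (just a sketch, exact test selection still TBD, assuming the pts/disk suite is installed):

  phoronix-test-suite install pts/disk
  phoronix-test-suite batch-benchmark pts/disk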

Qemu command line:
/usr/bin/kvm -name one-3 -S -M pc-1.2 -cpu Westmere -enable-kvm -m 73728 -smp 16,sockets=2,cores=8,threads=1 -uuid 3ebea329-cfbb-3447-0b49-b41e078a3ede -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-3.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -no-kvm-pit-reinjection -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/one//datastores/0/3/disk.0,if=none,id=drive-virtio-disk0,format=raw,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/var/lib/one//datastores/0/3/disk.1,if=none,id=drive-ide0-0-0,readonly=on,format=raw -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -netdev tap,fd=23,id=hostnet0,vhost=on,vhostfd=25 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=02:00:0a:64:02:fe,bus=pci.0,addr=0x3 -vnc 0.0.0.0:3,password -vga cirrus -incoming tcp:0.0.0.0:49155 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5

Initial Boot Benchmarks
===========================
Huge Page Usage
Physical Host: 2627584kB
QEMU Process: 2478080kB

Using phoronix pts/compilation
ImageMagick - 31.54s
Linux Kernel 3.1 - 43.91s
Mplayer - 30.49s
PHP - 22.25s


After Live Migration Benchmarks
===========================
Huge Page Usage
Physical Host: 3174400kB
QEMU Process: 3151872kB

Using phoronix pts/compilation
ImageMagick - 43.29s
Linux Kernel 3.1 - 76.67s
Mplayer - 45.41s
PHP - 29.1s
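For anyone wanting to collect comparable huge page numbers, something like this should work (a sketch assuming transparent hugepages rather than hugetlbfs; the pgrep pattern is just an example for this guest):

  grep AnonHugePages /proc/meminfo                                  # host-wide THP usage
  pid=$(pgrep -of 'kvm -name one-3')
  awk '/AnonHugePages/ {sum += $2} END {print sum " kB"}' /proc/$pid/smaps   # THP backing the qemu process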


	
-----Original Message-----
From: kvm-owner@xxxxxxxxxxxxxxx [mailto:kvm-owner@xxxxxxxxxxxxxxx] On Behalf Of Mark Petersen
Sent: Wednesday, January 02, 2013 7:32 PM
To: Marcelo Tosatti
Cc: kvm@xxxxxxxxxxxxxxx; Shouta.Uehara@xxxxxxxxxxxxxxx
Subject: RE: Guest performance is reduced after live migration

I believe I disabled huge pages on the guest and host previously, but I'll test a few scenarios and look at transparent hugepage usage specifically again over the next couple days and report back.
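The sort of checks I have in mind on both host and guest (a sketch; standard THP sysfs paths):

  cat /sys/kernel/mm/transparent_hugepage/enabled
  cat /sys/kernel/mm/transparent_hugepage/defrag
  cat /sys/kernel/mm/transparent_hugepage/khugepaged/pages_collapsed   # khugepaged activity so far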


Below is a command line used for testing.

/usr/bin/kvm -> qemu-x86_64

/usr/bin/kvm -name one-483 -S -M pc-1.2 -cpu Westmere -enable-kvm -m 73728 -smp 16,sockets=2,cores=8,threads=1 -uuid a844146a-0d72-a661-fe6c-cb6b7a4ba240 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/one-483.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/one//datastores/0/483/disk.0,if=none,id=drive-virtio-disk0,format=raw,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/var/lib/one//datastores/0/483/disk.1,if=none,id=drive-ide0-0-0,readonly=on,format=raw -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=27 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=02:00:0a:02:02:4b,bus=pci.0,addr=0x3 -vnc 0.0.0.0 -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5


-----Original Message-----
From: Marcelo Tosatti [mailto:mtosatti@xxxxxxxxxx]
Sent: Wednesday, January 02, 2013 6:49 PM
To: Mark Petersen
Cc: kvm@xxxxxxxxxxxxxxx; Shouta.Uehara@xxxxxxxxxxxxxxx
Subject: Re: Guest performance is reduced after live migration

On Wed, Jan 02, 2013 at 11:56:11PM +0000, Mark Petersen wrote:
> I don't think it's related to huge pages...
> 
> I was using phoronix-test-suite to run benchmarks.  The 'batch/compilation' group shows the slowdown for all tests; the 'batch/computation' group shows some performance degradation, but not nearly as significant.

Huge pages in the host, for the qemu-kvm process, I mean.
Without huge pages backing guest memory in the host, 4k EPT TLB entries will be used instead of 2MB EPT TLB entries.
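A quick host-side sanity check, assuming the kvm_intel module is in use:

  cat /sys/module/kvm_intel/parameters/ept    # should report Y if EPT is enabled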

> You could probably easily test this way without phoronix -  Start a VM with almost nothing running.  Download mainline Linux kernel, compile.  This takes about 45 seconds in my case (72GB memory, 16 virtual CPUs, idle physical host running this VM.)  Run as many times as you want, still takes ~45 seconds.
> 
> Migrate to a new idle host, kernel compile now takes ~90 seconds, wait
> 3 hours (should give khugepaged a chance to do its thing I imagine),

Please verify that this is the case (by checking how much memory is backed by hugepages).

http://www.mjmwired.net/kernel/Documentation/vm/transhuge.txt
"Monitoring Usage".


> kernel compiles still take 90 seconds.  Reboot virtual machine (run 'shutdown -r now', reboot, whatever.)  First compile will take ~45 seconds after reboot.  You don't even need to reset/destroy/shutdown the VM, just a reboot in the guest fixes the issue.
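For reference, a rough sketch of that reproduction using libvirt (domain name and destination URI are placeholders):

  # in the guest: time a clean mainline kernel build
  time make -j16
  # on the source host: live-migrate the guest, then repeat the timed build in the guest
  virsh migrate --live <domain> qemu+ssh://<dest-host>/system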

What is the qemu command line?

> I'm going to test more with qemu-kvm 1.3 tomorrow as I have a new/dedicated lab setup and recently built the 1.3 code base.  I'd be happy to run any test that would help in diagnosing the real issue here; I'm just not sure how best to diagnose it.
> 
> Thanks,
> Mark
>  
> -----Original Message-----
> 
> Can you describe more details of the test you are performing? 
> 
> If transparent hugepages are being used then there is the possibility that there has been no time for khugepaged to back guest memory with huge pages on the destination (I don't recall the interface for retrieving the number of hugepages for a given process, probably somewhere in /proc/<pid>/).
> 
> On Wed, Dec 19, 2012 at 12:43:37AM +0000, Mark Petersen wrote:
> > Hello KVM,
> > 
> > I'm seeing something similar to this (http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592) as well when doing live migrations on Ubuntu 12.04 (Host and Guest) with a backported libvirt 1.0 and qemu-kvm 1.2 (the improved live migration performance for large-memory guests is great!)  The default libvirt 0.9.8 and qemu-kvm 1.0 have the same issue.
> > 
> > Kernel is 3.2.0-34-generic and eglibc 2.15 on both host/guest.  I'm seeing similar issues with both virtio and ide bus.  Hugetlbfs is not used, but transparent hugepages are.  Host machines have dual Xeon E5-2660 processors.  I tried disabling EPT but that doesn't seem to make a difference, so I don't think it's a requirement to reproduce.
> > 
> > If I use an Ubuntu 10.04 guest with eglibc 2.11 and any of these kernels, I don't seem to have the issue:
> > 
> > linux-image-2.6.32-32-server - 2.6.32-32.62 
> > linux-image-2.6.32-38-server - 2.6.32-38.83 
> > linux-image-2.6.32-43-server - 2.6.32-43.97 
> > linux-image-2.6.35-32-server - 2.6.35-32.68~lucid1 
> > linux-image-2.6.38-16-server - 2.6.38-16.67~lucid1 
> > linux-image-3.0.0-26-server  - 3.0.0-26.43~lucid1
> > linux-image-3.2-5 - mainline 3.2.5 kernel
> > 
> > I'm guessing it's a libc issue (or at least a libc change causing the issue) as it doesn't seem to be kernel related.
> > 
> > I'll try other distributions as a guest (probably Debian/Ubuntu) with newer libc's and see if I can pinpoint the issue to a libc version.  Any other ideas?
> > 
> > Shared disk backend is clvm/LV via FC to EMC SAN, not sure what else might be relevant.
> > 
> > Thanks,
> > Mark
> > 
> > 

