Re: Loss of performance over AMD-V with KVM?


 



On Thu, Jul 11, 2013 at 04:28:22PM +0200, Victor fernandez wrote:
> Hi Gleb,
> 
> On 11/07/13 16:07, Gleb Natapov wrote:
> >On Thu, Jul 11, 2013 at 02:52:40PM +0200, Victor fernandez wrote:
> >>Hi Gleb,
> >>
> >>     you were right, we updated qemu-kvm to 1.0 and now
> >>     the S.L. instance starts with new cpu flags (for instance:
> >>     3dnowext, 3dnow).
> >>
> >>     But we still have bad performance, around a 17% loss on
> >>     our AMD processors. Let me explain better: we are running
> >>     three different high energy physics software packages. With two
> >>     of them we don't have any problem with performance, a 4-5% loss.
> >>     But with the other one (GAUSS, LHCb software), and only under
> >>     AMD-V virtualization, we see a 17%-33% performance loss
> >>     depending on the number of processors. The Gauss software on
> >>     Intel processors runs with acceptable performance.
> >>
> >>     We suspected that this could be due to cpu flags, but now we
> >>     are not sure about that, because 3dnowext and 3dnow are now
> >>     included and the performance is still bad. But these flags are
> >>     not yet included with qemu-kvm 1.0:
> >>
> >>     ht rdtscp constant_tsc nopl nonstop_tsc amd_dcm monitor extapic
> >>     ibs skinit wdt nodeid_msr hw_pstate lbrv svm_lock pausefilter
> >>
> >>     - Do you think that these flags could be the problem?
> >None of them is related to an instruction set relevant for performance.
> >amd_dcm is a synthetic flag (meaning that it does not correspond to any
> >real cpuid bit) that says that your machine is NUMA. Can you bind your
> >VM to a single NUMA node with taskset/numactl?
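The suggestion above can be sketched like this; only the taskset/numactl command names come from the thread, while the node number, CPU list and PID below are placeholder values:

```shell
# Start a VM bound to NUMA node 0 for both CPUs and memory
# (node number and qemu arguments are illustrative placeholders):
#   numactl --cpunodebind=0 --membind=0 /usr/bin/kvm -m 2048 -smp 1 ...
# Restrict an already-running qemu process (placeholder PID 1234) to CPUs 0-3:
#   taskset -pc 0-3 1234
# Harmless live demonstration: print the current shell's own CPU affinity.
taskset -p $$
```

Binding both CPUs and memory to one node avoids remote-node memory accesses, which is what the amd_dcm/NUMA question is probing for.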
> I have run the software on the Intel machine, which doesn't have the
> amd_dcm flag, with the same parameters (2 GB mem, 1 core),
> and the results were ok, around a 7-9% loss:
> 
amd_dcm is AMD specific. An Intel machine may or may not be NUMA.
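Whether a given host is actually NUMA can be checked directly from sysfs rather than inferred from cpuinfo flags; a sketch:

```shell
# Count the memory nodes the kernel exposes: 1 means a flat (non-NUMA)
# machine, >1 means NUMA. "numactl --hardware" shows the same
# information in more detail, if installed.
ls -d /sys/devices/system/node/node* 2>/dev/null | wc -l
```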

> Intel flags (dual-processor, dual-core: 2 x Xeon(R) CPU 5160 @ 3.00GHz):
If there is only one dual-core cpu there, it is definitely not NUMA.

> 
> fpu vme de pse tsc msr pae mce cx8 apic sep mtrr
> pge mca cmov pat pse36 clflush dts acpi mmx fxsr
> sse sse2 ss ht tm pbe syscall nx lm constant_tsc
> arch_perfmon pebs bts rep_good aperfmperf pni
> dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr
> pdcm dca lahf_lm tpr_shadow
> 
> >
> >I see that you assign only 2G to your VM; maybe this is not enough?
> >How much memory does the host have?
> In this case the host memory is 16 GB, and 2 GB should be enough for
> running the software in the VM.
> On the other hand, this is for a paper in the Journal of Grid Computing,
> and we did a lot of tests with more memory per core (2 GB per core,
> 4 GB per core, etc.), and the results on the AMD hardware were the
> same, with low performance. Finally, we have the Intel results, which
> are enough for our article, but we want to understand what is
> happening in the virtualization of our AMD nodes.
> 
What I also notice now is that you do not have two-level paging on AMD
(no npt flag in cpuinfo), but then you do not have it on Intel either
(no ept flag). To get optimal performance from virtualization you should
use HW with those features.
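A quick check for those features (npt on AMD, ept on Intel) can be sketched like this; it prints "present" or "absent" depending on the CPU:

```shell
# Does the CPU advertise nested/extended page tables?
# Whether KVM actually enabled them can additionally be read from
# /sys/module/kvm_amd/parameters/npt or /sys/module/kvm_intel/parameters/ept.
if grep -qwE 'npt|ept' /proc/cpuinfo; then echo present; else echo absent; fi
```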

> Thanks a lot Gleb,
>     Víctor Fdez.
> 
> >
> >>     - Do you have any idea what could it be?
> >>
> >>     Thanks in advance Gleb,
> >>         Víctor Fdez.
> >>
> >>
> >>On 09/07/13 18:08, Gleb Natapov wrote:
> >>>On Tue, Jul 09, 2013 at 05:30:26PM +0200, Victor fernandez wrote:
> >>>>Hi Gleb,
> >>>>
> >>>>     root@vfalbor-desktop:~# uname -a
> >>>>Linux vfalbor-desktop 2.6.32-24-generic #43-Ubuntu SMP Thu Sep 16
> >>>>14:58:24 UTC 2010 x86_64 GNU/Linux
> >>>>
> >>>>     root@vfalbor-desktop:~# dpkg --list | grep qemu
> >>>>     ii  qemu-common 0.12.3+noroms-0ubuntu9.21  qemu common
> >>>>functionality (bios, documentati
> >>>>     ri  qemu-kvm 0.12.3+noroms-0ubuntu9.21
> >>>>Full virtualization on i386 and amd64 hardwa
> >>>>
> >>>Ugh, those are ancient, vendor-specific packages. I thought you were
> >>>running Scientific Linux 6 as a host too, which also has vendor-specific
> >>>packages, but at least it is something that I can easily look into :)
> >>>Can you reproduce with an upstream kernel/qemu? If it works upstream
> >>>you can open a bug against Ubuntu.
> >>>
> >>>>     One of the things that we did was to try to update libvirt
> >>>>     to the latest version, because our cloud uses Ubuntu 10.04,
> >>>>     where the last stable version of libvirt was
> >>>>     0.7.5-5ubuntu27.23. But when we try to get the capabilities,
> >>>>     we get the following:
> >>>This is definitely not libvirt's fault. This is either the kernel or
> >>>qemu. Looking into the git history, though, 3dnow/3dnowext support
> >>>was introduced into the kernel kvm component before 2.6.32.
> >>>
> >>>>     root@vfalbor-desktop:~/# virsh capabilities | grep 3dnow
> >>>>       <feature name='3dnowprefetch'/>
> >>>>     root@vfalbor-desktop:~/#
> >>>>
> >>>>     but in the cpu_map.xml file we find the 3dnow and 3dnowext for
> >>>>our architecture:
> >>>>
> >>>>     root@vfalbor-desktop:~/# cat /usr/share/libvirt/cpu_map.xml |
> >>>>grep 3dnow
> >>>>             <feature name='3dnowext'> <!-- CPUID_EXT2_3DNOWEXT -->
> >>>>             <feature name='3dnow'> <!-- CPUID_EXT2_3DNOW -->
> >>>>             <feature name='3dnowprefetch'> <!--
> >>>>CPUID_EXT3_3DNOWPREFETCH -->
> >>>>
> >>>>     Thanks Gleb,
> >>>>         Víctor Fdez.
> >>>>
> >>>>On 09/07/13 17:16, Gleb Natapov wrote:
> >>>>>On Tue, Jul 09, 2013 at 05:07:57PM +0200, Victor fernandez wrote:
> >>>>>>Yep,
> >>>>>>
> >>>>>>     -> this is for the host:
> >>>>>>
> >>>>>>     root@vfalbor-desktop:~# cat /proc/cpuinfo | grep flags | uniq
> >>>>>>     flags        : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr
> >>>>>>pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx
> >>>>>>mmxext fxsr_opt pdpe1gb rdtscp lm 3dnowext 3dnow constant_tsc
> >>>>>>rep_good nonstop_tsc extd_apicid amd_dcm pni monitor cx16 popcnt
> >>>>>>lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse
> >>>>>>3dnowprefetch osvw ibs skinit wdt nodeid_msr
> >>>>>>
> >>>>>>     -> this is for the VM:
> >>>>>>     [root@sl6 ~]# cat /proc/cpuinfo | grep flags | uniq
> >>>>>>     flags        : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr
> >>>>>>pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext
> >>>>>>fxsr_opt pdpe1gb lm rep_good extd_apicid pni cx16 popcnt lahf_lm
> >>>>>>cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch
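The host/guest comparison above can be automated with comm(1); this sketch uses abbreviated stand-in flag lists rather than the full output quoted above:

```shell
# List flags present on the host but missing in the guest.
# The two lists are abbreviated stand-ins for the real /proc/cpuinfo output.
printf '%s\n' fpu sse sse2 svm 3dnow 3dnowext | sort > /tmp/host_flags
printf '%s\n' fpu sse sse2 | sort > /tmp/guest_flags
comm -23 /tmp/host_flags /tmp/guest_flags
# prints: 3dnow, 3dnowext, svm (one per line)
```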
> >>>>>>
> >>>>>Hmm, what are your kernel and qemu versions?
> >>>>>
> >>>>>>On 09/07/13 17:00, Gleb Natapov wrote:
> >>>>>>>On Tue, Jul 09, 2013 at 04:57:13PM +0200, Victor fernandez wrote:
> >>>>>>>>     2.- with the "-cpu host" parameter:
> >>>>>>>>
> >>>>>>>>/usr/bin/kvm -cpu host -m 2048 -smp 1 -name i-2-11-VM -monitor
> >>>>>>>>telnet:127.0.0.1:9941,server,nowait -boot c /mnt/b07f7f29-7b9b-3d4b-b170-77876d10e7b1/134bc689-1207-487c-ae95-6c0dc2e0f285
> >>>>>>>>-parallel none -usb -usbdevice tablet -vnc :0 -vga cirrus -net
> >>>>>>>>nic,macaddr=06:00:90:00:00:08,vlan=0,model=e1000,name=e1000.0 -net
> >>>>>>>>tap,script=/usr/bin/qemu-ifup,vlan=0,name=tap.0
> >>>>>>>>
> >>>>>>>With this command line, can you provide the output of
> >>>>>>>/proc/cpuinfo on the host and the guest?
> >>>>>>>
> 

--
			Gleb.



