Re: Performance data when running Windows VMs

On Wed, 2009-08-26 at 19:26 +0300, Avi Kivity wrote:
> On 08/26/2009 07:14 PM, Andrew Theurer wrote:
> > On Wed, 2009-08-26 at 18:44 +0300, Avi Kivity wrote:
> >    
> >> On 08/26/2009 05:57 PM, Andrew Theurer wrote:
> >>      
> >>> I recently gathered some performance data when running Windows Server
> >>> 2008 VMs, and I wanted to share it here.  There are 12 Windows
> > >>> Server 2008 64-bit VMs (1 vcpu, 2 GB) running, which handle the concurrent
> > >>> execution of 6 J2EE-type benchmarks.  Each benchmark needs an App VM and
> >>> a Database VM.  The benchmark clients inject a fixed rate of requests
> >>> which yields X% CPU utilization on the host.  A different hypervisor was
> >>> compared; KVM used about 60% more CPU cycles to complete the same amount
> >>> of work.  Both had their hypervisor specific paravirt IO drivers in the
> >>> VMs.
> >>>
> >>> Server is a 2 socket Core/i7, SMT off, with 72 GB memory
> >>>
> >>>        
> >> Did you use large pages?
> >>      
> > Yes.
> >    
> 
> The stats show 'largepage = 12'.  Something's wrong.  There's a commit 
> (7736d680) that's supposed to fix largepage support for kvm-87, maybe 
> it's incomplete.

How strange.  /proc/meminfo showed that almost all of the pages were
used:

HugePages_Total:   12556
HugePages_Free:      220
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB

(12556 - 220) pages x 2 MB is roughly 24 GB, which matches the 12 x 2 GB of
guest memory, so I just assumed they were being used properly.  Maybe not.
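
For reference, this is the sanity check I'm describing, as a minimal Python
sketch; it only reads the /proc/meminfo counters shown above and knows nothing
about whether qemu actually backs guest memory with those pages:

#!/usr/bin/env python
# Rough sanity check: how much of the hugepage pool is in use,
# based only on the /proc/meminfo counters pasted above.
fields = {}
for line in open("/proc/meminfo"):
    if line.startswith("HugePages_") or line.startswith("Hugepagesize"):
        name, value = line.split(":")
        fields[name] = int(value.split()[0])   # strip whitespace and the "kB" unit

used = fields["HugePages_Total"] - fields["HugePages_Free"]
size_kb = fields["Hugepagesize"]               # reported in kB (2048 here)
print("HugePages in use: %d of %d (%.1f GB of %.1f GB)" % (
    used, fields["HugePages_Total"],
    used * size_kb / 1048576.0,
    fields["HugePages_Total"] * size_kb / 1048576.0))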

> > >>> I/O on the host was not what I would call very high: outbound network
> > >>> averaged 163 Mbit/s and inbound 8 Mbit/s, while disk read ops were
> > >>> 243/sec and write ops were 561/sec
> >>>
> >>>        
> >> What was the disk bandwidth used?  Presumably, direct access to the
> >> volume with cache=off?
> >>      
> > 2.4 MB/sec write, 0.6 MB/sec read, cache=none
> > The VMs' boot disks are IDE, but apps use their second disk which is
> > virtio.
> >    
> 
> Chickenfeed.
> 
> Do the network stats include interguest traffic?  I presume *all* of the 
> traffic was interguest.

Sar network data:

>                  IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s
> Average:           lo      0.00      0.00      0.00      0.00 
> Average:         usb0      0.39      0.19      0.02      0.01 
> Average:         eth0   2968.83   5093.02    340.13   6966.64
> Average:         eth1   2992.92   5124.08    342.75   7008.53 
> Average:         eth2   1455.53   2500.63    167.45   3421.64 
> Average:         eth3   1500.59   2574.36    171.98   3524.82 
> Average:          br0      2.41      0.95      0.32      0.13 
> Average:          br1      1.52      0.00      0.20      0.00 
> Average:          br2      1.52      0.00      0.20      0.00 
> Average:          br3      1.52      0.00      0.20      0.00 
> Average:          br4      0.00      0.00      0.00      0.00 
> Average:         tap3    669.38    708.07    290.89    140.81 
> Average:       tap109    678.53    723.58    294.07    143.31 
> Average:       tap215    673.20    711.47    291.99    141.78 
> Average:       tap321    675.26    719.33    293.01    142.37 
> Average:        tap27    679.23    729.90    293.86    143.60 
> Average:       tap133    680.17    734.08    294.33    143.85 
> Average:         tap2   1002.24   2214.19   3458.54    457.95 
> Average:       tap108   1021.85   2246.53   3491.02    463.48 
> Average:       tap214   1002.81   2195.22   3411.80    457.28 
> Average:       tap320   1017.43   2241.49   3508.20    462.54 
> Average:        tap26   1028.52   2237.98   3483.84    462.53 
> Average:       tap132   1034.05   2240.89   3493.37    463.32 

tap0-99 go to eth0, 100-199 to eth1, 200-299 to eth2, 300-399 to eth3.
There is some inter-guest traffic between VM pairs (like taps 2 & 3,
108 & 109, etc.), but it is not that significant.
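
If it helps, here is the rough per-NIC tally of the tap traffic, as a small
Python sketch; it assumes raw "sar -n DEV" output on stdin (no "> " quoting)
and uses the tap-number-to-ethN mapping described above:

#!/usr/bin/env python
# Group the sar "Average:" tap lines by NIC (tapN bridges to eth(N/100))
# and total the rxkB/s and txkB/s columns per NIC.
import re, sys

totals = {}   # nic -> [rxkB/s, txkB/s]
for line in sys.stdin:
    m = re.match(r"Average:\s+tap(\d+)\s+\S+\s+\S+\s+(\S+)\s+(\S+)", line)
    if not m:
        continue
    nic = "eth%d" % (int(m.group(1)) // 100)
    totals.setdefault(nic, [0.0, 0.0])
    totals[nic][0] += float(m.group(2))   # rxkB/s
    totals[nic][1] += float(m.group(3))   # txkB/s

for nic in sorted(totals):
    print("%s taps: rx %.1f kB/s, tx %.1f kB/s" % (nic, totals[nic][0], totals[nic][1]))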

> 
> >> linux-aio should help reduce cpu usage.
> >>      
> > I assume this is in a newer version of Qemu?
> >    
> 
> No, posted and awaiting merge.
> 
> >> Could it be that Windows uses the debug registers?  Maybe we're
> >> incorrectly deciding to switch them.
> >>      
> > I was wondering about that.  I was thinking of just backing out the
> > support for debugregs and seeing what happens.
> >
> > Did the up/down_read seem kind of high?  Are we doing a lot of locking?
> >    
> 
> It is.  We do.  Marcelo made some threats to remove this lock.

Thanks,

-Andrew


--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
