KVM performance vs. Xen

I wanted to share some performance data for KVM, especially compared to
Xen, using a more complex scenario than a single-guest benchmark:
heterogeneous server consolidation.

The Workload:
The workload simulates a consolidation of servers onto a
single host.  There are 3 server types: web, imap, and app (j2ee).  In
addition, there are other "helper" servers which are also consolidated:
a db server, which helps out with the app server, and an nfs server,
which helps out with the web server (a portion of the docroot is nfs
mounted).  There is also one other server that is simply idle.  All 6
servers make up one set.  The first 3 server types are sent requests,
which in turn may send requests to the db and nfs helper servers.  The
request rate is throttled to produce a fixed amount of work.  In order
to increase utilization on the host, more sets of these servers are
used.  The clients that send requests also have a response-time
requirement, which is monitored.  The following results passed the
response-time requirement.

The host hardware:
A 2-socket, 8-core Nehalem with SMT and EPT enabled, lots of disks, and
4 x 1 Gb Ethernet.

The host software:
Both Xen and KVM use the same host Linux OS, SLES11.  KVM uses the
2.6.27.19-5-default kernel and Xen uses the 2.6.27.19-5-xen kernel.  I
have tried 2.6.29 for KVM, but its results were actually worse.  KVM modules
are rebuilt with kvm-85.  Qemu is also from kvm-85.  Xen version is
"3.3.1_18546_12-3.1".

The guest software:
All guests are RedHat 5.3.  The same disk images are used, but with
different kernels.  Xen uses the RedHat Xen kernel and KVM uses 2.6.29 with all
paravirt build options enabled.  Both use PV I/O drivers.  Software
used: Apache, PHP, Java, Glassfish, Postgresql, and Dovecot.

Hypervisor configurations:
Xen guests use "phy:" for disks.
KVM guests use "-drive" for disks with cache=none.
KVM guests are backed with large pages.
Memory and CPU sizings are different for each guest type, but a
particular guest's sizings are the same for Xen and KVM.  (Illustrative
config fragments are sketched below.)
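
For concreteness, here is roughly what those settings look like.  This is
only a sketch: the volume path, guest name, and sizings below are made up,
and flags may differ slightly between qemu/Xen versions.

  # Xen domU config fragment -- hypothetical logical volume
  disk = [ 'phy:/dev/vg_guests/web1,xvda,w' ]

  # KVM guest started with qemu from kvm-85 -- same hypothetical volume,
  # virtio disk bypassing the host page cache, guest RAM backed by
  # hugetlbfs mounted at /hugepages
  qemu-system-x86_64 -m 2048 -smp 2 \
      -drive file=/dev/vg_guests/web1,if=virtio,cache=none \
      -mem-path /hugepages \
      -net nic,model=virtio -net tap

With cache=none the virtio disk bypasses the host page cache, which keeps
the comparison with Xen's "phy:" (also direct to the block device) fair.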

The test run configuration:
There are 4 sets of servers used, so that's 24 guests total (4 idle
ones, 20 active ones).

Test Results:
The throughput is equal in these tests, as the clients throttle the work
(assuming you don't run out of a resource on the host).  What's
telling is the CPU used to do the same amount of work:

Xen:  52.85%
KVM:  66.93%

So, KVM requires 66.93/52.85 = 26.6% more CPU to do the same amount of
work. Here's the breakdown:

total    user    nice  system     irq softirq   guest
66.90    7.20    0.00   12.94    0.35    3.39   43.02
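
For reference, a minimal sketch of how a breakdown like the one above can
be pulled from /proc/stat on this 2.6.27 host.  The 60-second interval is
arbitrary, and since the kernel accounts guest time under both "user" and
"guest", user is printed here with guest subtracted (which I assume is how
the table above was produced):

  # Two snapshots of the aggregate "cpu" line: fields are
  # user nice system idle iowait irq softirq steal guest
  s1=$(awk '/^cpu /{print $2,$3,$4,$5,$6,$7,$8,$9,$10}' /proc/stat)
  sleep 60
  s2=$(awk '/^cpu /{print $2,$3,$4,$5,$6,$7,$8,$9,$10}' /proc/stat)
  echo $s1 $s2 | awk '{
      n = 9
      for (i = 1; i <= n; i++) { d[i] = $(i+n) - $i; tot += d[i] }
      printf "user %.2f  system %.2f  irq %.2f  softirq %.2f  guest %.2f\n",
             100*(d[1]-d[9])/tot, 100*d[3]/tot, 100*d[6]/tot,
             100*d[7]/tot, 100*d[9]/tot
  }'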

Comparing non-guest busy time (66.90 - 43.02 = 23.88) to guest time,
that's 23.88/43.02 = 55% overhead for virtualization.  I certainly don't
expect it to be 0, but 55% seems a bit high.  So, what's the reason for
this overhead?  At the bottom is oprofile output of the top functions for
KVM.  Some observations:

1) I'm seeing about 2.3% in scheduler functions [that I recognize].
Does that seem a bit excessive?
2) cpu_physical_memory_rw: due to not using preadv/pwritev?
3) vmx_[save|load]_host_state: I take it this is from guest switches?
We have 180,000 context switches a second.  Is this more than expected?
I wonder if schedstats can show why we context switch (need to let
someone else run, yielded, waiting on io, etc); a sketch for checking
that is below.
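
On the context-switch question, the voluntary/involuntary split is visible
per thread if the host kernel has CONFIG_SCHED_DEBUG and CONFIG_SCHEDSTATS
enabled (an assumption for this SLES11 kernel); roughly, voluntary means
the task blocked or yielded, involuntary means it was preempted:

  # Per-thread counters for every qemu vcpu/IO thread
  for pid in $(pgrep -f qemu-system-x86_64); do
      for sched in /proc/$pid/task/*/sched; do
          echo "== $sched =="
          grep -E '^nr_(voluntary_|involuntary_)?switches' "$sched"
      done
  done

  # System-wide scheduler statistics (per-cpu and per-domain counters)
  cat /proc/schedstat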


samples  %       image name                      symbol name
385914891       61.3122 kvm-intel.ko            vmx_vcpu_run
11413793        1.8134  libc-2.9.so             /lib64/libc-2.9.so
8943054 1.4208  qemu-system-x86_64              cpu_physical_memory_rw
6877593 1.0927  kvm.ko                          kvm_arch_vcpu_ioctl_run
6469799 1.0279  qemu-system-x86_64              phys_page_find_alloc
5080474 0.8072  vmlinux-2.6.27.19-5-default     copy_user_generic_string
4154467 0.6600  kvm-intel.ko                    __vmx_load_host_state
3991060 0.6341  vmlinux-2.6.27.19-5-default     schedule
3455331 0.5490  kvm-intel.ko                    vmx_save_host_state
2582344 0.4103  vmlinux-2.6.27.19-5-default     find_busiest_group
2509543 0.3987  qemu-system-x86_64              main_loop_wait
2457476 0.3904  vmlinux-2.6.27.19-5-default     kfree
2395296 0.3806  kvm.ko                          kvm_set_irq
2385298 0.3790  vmlinux-2.6.27.19-5-default     fget_light
2229755 0.3543  vmlinux-2.6.27.19-5-default     __switch_to
2178739 0.3461  bnx2.ko                         bnx2_rx_int
2156418 0.3426  vmlinux-2.6.27.19-5-default     complete_signal
1854497 0.2946  qemu-system-x86_64              virtqueue_get_head
1833823 0.2913  vmlinux-2.6.27.19-5-default     try_to_wake_up
1816954 0.2887  qemu-system-x86_64              cpu_physical_memory_map
1776548 0.2822  oprofiled                       find_kernel_image
1737294 0.2760  vmlinux-2.6.27.19-5-default     kmem_cache_alloc
1662346 0.2641  qemu-system-x86_64              virtqueue_avail_bytes
1651070 0.2623  vmlinux-2.6.27.19-5-default     do_select
1643139 0.2611  vmlinux-2.6.27.19-5-default     update_curr
1640495 0.2606  vmlinux-2.6.27.19-5-default     kmem_cache_free
1606493 0.2552  libpthread-2.9.so               pthread_mutex_lock
1549536 0.2462  qemu-system-x86_64              lduw_phys
1535539 0.2440  vmlinux-2.6.27.19-5-default     tg_shares_up
1438468 0.2285  vmlinux-2.6.27.19-5-default     mwait_idle
1316461 0.2092  vmlinux-2.6.27.19-5-default     __down_read
1282486 0.2038  vmlinux-2.6.27.19-5-default     native_read_tsc
1226069 0.1948  oprofiled                       odb_update_node
1224551 0.1946  vmlinux-2.6.27.19-5-default     sched_clock_cpu
1222684 0.1943  tun.ko                          tun_chr_aio_read
1194034 0.1897  vmlinux-2.6.27.19-5-default     task_rq_lock
1186129 0.1884  kvm.ko                          x86_decode_insn
1131644 0.1798  bnx2.ko                         bnx2_start_xmit
1115575 0.1772  vmlinux-2.6.27.19-5-default     enqueue_hrtimer
1044329 0.1659  vmlinux-2.6.27.19-5-default     native_sched_clock
988546  0.1571  vmlinux-2.6.27.19-5-default     fput
980615  0.1558  vmlinux-2.6.27.19-5-default     __up_read
942270  0.1497  qemu-system-x86_64              kvm_run
925076  0.1470  kvm-intel.ko                    vmcs_writel
889220  0.1413  vmlinux-2.6.27.19-5-default     dev_queue_xmit
884786  0.1406  kvm.ko                          kvm_apic_has_interrupt
880421  0.1399  librt-2.9.so                    /lib64/librt-2.9.so
880306  0.1399  vmlinux-2.6.27.19-5-default     nf_iterate
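
The exact oprofile invocation isn't shown above; for anyone wanting to
collect a similar per-symbol report, here is a sketch of the legacy
opcontrol workflow (the vmlinux path is an assumption for this SLES11
kernel):

  opcontrol --init
  opcontrol --setup --vmlinux=/boot/vmlinux-2.6.27.19-5-default
  opcontrol --start
  # ... run the workload for the measurement interval ...
  opcontrol --stop
  opcontrol --dump
  opreport --symbols | head -50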


-Andrew Theurer



