Degrading Network performance as KVM/kernel version increases

I have been seeing degrading network performance with newer versions of
KVM and was wondering whether this is expected. It looks like a bug, but
I am new to this and may be doing something wrong, so I thought I would
ask.

KVM Host OS: Fedora 12 x86_64
KVM Guest OS: Tiny Core Linux (2.6.33.3 kernel)

I have tried multiple host kernels (2.6.31.5, 2.6.31.6, 2.6.32.19 and
2.6.35.4) along with qemu-kvm 11.0 and qemu-system-x86_64 12.5, both
compiled from the qemu-kvm repo.

Setup: two hosts, one guest on each, connected by a 10 Gb NIC.

I am using virtio and have checked that hardware acceleration is
working.
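
For what it is worth, the checks involved look roughly like this (a
sketch; the exact kvm module and NIC names vary by CPU vendor and guest
config):

```shell
# On the host: confirm the KVM modules are loaded and /dev/kvm exists
lsmod | grep kvm        # expect kvm plus kvm_intel or kvm_amd
ls -l /dev/kvm

# In the guest: confirm the NIC is actually using the virtio driver
lspci | grep -i virtio
ethtool -i eth0         # driver should report virtio_net
```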

Processor usage is less than 50% on the hosts and guests.

Here is what I am seeing. I will just include guest-to-guest statistics;
I have more (host to guest, etc.) if anyone is interested. "Threads" is
the number of parallel iperf streams (-P):

Host kernel             QEMU              Threads  Guest 1 -> 2  Guest 2 -> 1
2.6.31.5                qemu-kvm 11.0        1     1.57 Gb/s     1.37 Gb/s
2.6.31.5                qemu-kvm 11.0        4     3.16 Gb/s     4.29 Gb/s
2.6.31.5                qemu-system 12.5     1     1.02 Gb/s     0.420 Gb/s
2.6.31.5                qemu-system 12.5     4     1.30 Gb/s     0.655 Gb/s
2.6.31.5 / 2.6.32.19 *  qemu-kvm 11.0        1     0.580 Gb/s    1.32 Gb/s
2.6.32.19               qemu-kvm 11.0        1     0.548 Gb/s    0.603 Gb/s
2.6.32.19               qemu-kvm 11.0        4     0.569 Gb/s    0.478 Gb/s
2.6.32.19               qemu-system 12.5     1     0.571 Gb/s    0.500 Gb/s
2.6.32.19               qemu-system 12.5     4     0.633 Gb/s    0.705 Gb/s
2.6.35.4                qemu-system 12.5     1     0.418 Gb/s    (gave up)

* 2.6.31.5 on host 1 and 2.6.32.19 on host 2.
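
The numbers above come from runs along these lines (a sketch of the
iperf 2.x invocation; "guest2" is a placeholder for the other guest's
address):

```shell
# On guest 2 (receiver): start the iperf server
iperf -s

# On guest 1 (sender): single TCP stream
iperf -c guest2 -t 10

# Same test with 4 parallel streams (the "-P 4" runs above)
iperf -c guest2 -t 10 -P 4
```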


My goal is to get as much bandwidth as I can between the two guests
running on separate hosts. The most I have been able to get is ~4 Gb/s
with 4 iperf threads from guest A to guest B; with a single iperf
thread I cannot get much over 1.5 Gb/s guest to guest. Is there some
sort of known send limit per thread? Is it expected that the latest
versions of the kernel and modules perform worse than earlier versions
for network performance? (I am guessing not; am I doing something
wrong?) For comparison, 4 iperf threads host to host yields ~9.5 Gb/s.
Any ideas on how I can get better performance with newer versions? I
have tried using vhost on 2.6.35, but I get a "vhost could not be
initialized" error. The only thing I could find on that error is that
SELinux should be off, which it is.
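
For reference, this is roughly how I am requesting vhost-net on the
qemu command line (a sketch; the tap interface name and memory size are
placeholders, and it needs vhost_net built for the 2.6.35 host so that
/dev/vhost-net exists):

```shell
# Load the vhost-net module on the host so /dev/vhost-net appears
modprobe vhost_net

# Start the guest with a virtio NIC backed by vhost
qemu-system-x86_64 -enable-kvm -m 1024 \
    -netdev tap,id=net0,ifname=tap0,vhost=on \
    -device virtio-net-pci,netdev=net0
```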

I am looking for ideas on increasing the bandwidth between guests and
thoughts on the degrading performance.

Thanks for your help! --Matt

