Re: Degrading Network performance as KVM/kernel version increases


 



 On 8/31/2010 6:00 PM, matthew.r.rohrer@xxxxxxxxxx wrote:
I have been getting degrading network performance with newer versions of
KVM and was wondering if this was expected?  It seems like a bug, but I
am new to this and maybe I am doing something wrong so I thought I would
ask.

KVM Host OS: Fedora 12 x86_64
KVM Guest OS: Tiny Core Linux (2.6.33.3 kernel)

I have tried multiple host kernels (2.6.31.5, 2.6.31.6, 2.6.32.19, and
2.6.35.4) along with qemu-kvm 0.11.0 and qemu-system-x86_64 from 0.12.5,
compiled from the qemu-kvm repo.


I can't say anything about the kernel version making things worse. For the qemu-kvm version, at least, you should be using -device and -netdev instead of -net nic -net tap (see http://git.qemu.org/qemu.git/tree/docs/qdev-device-use.txt, since that document is not in the 0.12 tree).
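For reference, a minimal sketch of the two invocation styles (the disk image, MAC address, and tap interface name here are hypothetical placeholders):

```shell
# Legacy syntax, roughly what a 0.11-era setup would use:
qemu-system-x86_64 -m 512 disk.img \
    -net nic,model=virtio,macaddr=52:54:00:12:34:56 \
    -net tap,ifname=tap0,script=no,downscript=no

# Preferred -netdev/-device syntax for 0.12 and later; this also
# avoids the emulated "vlan" hub, which can itself cost throughput:
qemu-system-x86_64 -m 512 disk.img \
    -netdev tap,id=net0,ifname=tap0,script=no,downscript=no \
    -device virtio-net-pci,netdev=net0,mac=52:54:00:12:34:56
```

The -netdev form connects the virtio NIC directly to the tap backend instead of routing frames through qemu's internal vlan hub, which is one reason the qdev syntax is recommended for throughput testing.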


Setup: two hosts, one guest on each, connected by a 10 Gb NIC.

I am using virtio and have checked that hardware acceleration is
working.

Processor usage is less than 50% on host and guests.

Here is what I am seeing. I will just include guest-to-guest statistics;
I have more (host to guest, etc.) if anyone is interested:

<snip results>


My goal is to get as much bandwidth as I can between the 2 guests
running on separate hosts.  The most I have been able to get is ~4 Gb/s
running 4 iperf threads from guest A to guest B, and I cannot get much
over 1.5 Gb/s from guest to guest with a single iperf thread.  Is there
some sort of known send limit per thread?  Is it expected that the
latest version of the kernel and modules performs worse than earlier
versions in the area of network performance?  (I am guessing not; am I
doing something wrong?)  For comparison, 4 iperf threads host to host
yields ~9.5 Gb/s.  Any ideas on how I can get better performance with
newer versions?  I have tried using vhost in 2.6.35, but I get a "vhost
could not be initialized" error.  The only thing I could find on that
error is that SELinux should be off, which it is.

I am looking for ideas on increasing the bandwidth between guests and
thoughts on the degrading performance.
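For anyone reproducing these numbers, the runs described above would presumably look something like the following (the server host name is a hypothetical placeholder):

```shell
# On guest B (server side):
iperf -s

# On guest A: one single-stream run, then 4 parallel streams (-P 4),
# each for 30 seconds:
iperf -c guestB -t 30
iperf -c guestB -t 30 -P 4
```

A single TCP stream is often limited by per-connection factors (window size, a single vCPU saturating on the transmit path), which is why -P 4 can get closer to line rate.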


Vhost-net is probably your best bet for maximizing throughput. You might try a separate post just for the vhost error if nobody chimes in about it here.
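As a sketch, enabling vhost-net on a 2.6.35 host would look roughly like this (assuming a qemu-kvm build new enough to accept vhost=on; the disk image and tap name are hypothetical placeholders):

```shell
# Load the vhost-net module and confirm the char device exists;
# a missing /dev/vhost-net is one common cause of the init error:
modprobe vhost_net
ls -l /dev/vhost-net

# Attach the guest NIC to the tap backend through vhost:
qemu-system-x86_64 -m 512 disk.img \
    -netdev tap,id=net0,ifname=tap0,vhost=on \
    -device virtio-net-pci,netdev=net0
```

If /dev/vhost-net exists but initialization still fails, checking its permissions for the user running qemu would be the next step.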


Thanks for your help! --Matt

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

