Re: KVM incompatible with multipath?

Whoops, please disregard the previous mail; I hit ctrl-enter while still composing, sorry.

Anthony Liguori wrote:
> You need to create a partition table and set up grub in order to be able to use something as -hda. You don't get that automatically with debootstrap.

Although I didn't include that in my mail, I did configure partitions and debootstrapped Lenny onto part1. But that's no longer what's important; so far I've been unable to reproduce the data corruption. Probably some glitch in the matrix.

This post has kind of grown into a performance test of virtio block and net for Xen 3.2.1 and kvm-87, specifically when using multipath IO against an Equallogic iSCSI box.

I added Xen 3.2.1 to the same box, installed a domU and ran some performance tests. Afterwards I retried KVM, and this time I didn't experience the problems I had before. I boot the same multipathed disk that I used for Xen with 2.6.27.25 (+ kvm-87 modules in the initrd); you can find the boot script below. It actually boots now and I have no problems whatsoever. Note that the kernel I built works as a Xen domU kernel, as a native Linux kernel, and as a KVM guest kernel.

I've run some bonnie++ tests; see below for the results.
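For reference, the runs below were all roughly of this shape (a sketch; the mount point and size are placeholders, with -s deliberately set well past guest RAM so the page cache can't soak up the working set):

#!/bin/sh
# run bonnie++ as root against the filesystem under test;
# -s is the test file size in MB, sized past RAM on purpose
bonnie++ -d /mnt/test -s 4096 -u root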

At first I thought that using iSCSI and multipath on the host, whether under KVM or Xen, would be the fastest, so I ran a bunch of tests against that setup. What's interesting in these results is that the KVM guest has much lower sequential block output than the host kernel, but much better sequential input. The latter is probably due to caching and buffers in the KVM host kernel. Setting cache=writeback improves both, but block output is still ~75MB/sec slower than on the host. It seems that KVM guest write performance is CPU limited. Any advice on how to get better write speeds is highly appreciated.
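For what it's worth, one way to sanity-check the CPU-limited theory is to watch the qemu process on the host while bonnie++ is writing in the guest (a sketch; it assumes a single qemu process on the box):

# snapshot the qemu process during a guest write run;
# a core pegged near 100% suggests the write path is CPU-bound
top -b -n 1 -p $(pgrep -f qemu-system-x86_64)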


Afterwards, I decided to try multipath+iSCSI inside the guest. It turns out that Xen is the big winner in the end, but with a catch: the highest performance was measured when using iSCSI+multipath in the Xen domU with jumbo frames. The rest of the results, with the disk mapped in the host, therefore aren't all that relevant at this point.
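For completeness, the guest-side sessions were brought up roughly like this (a sketch using open-iscsi and multipath-tools; the three portal addresses are placeholders for whatever the Equallogic presents on each path):

# discover the target over each of the three paths (addresses are placeholders)
for portal in 10.0.4.1 10.0.5.1 10.0.6.1; do
    iscsiadm -m discovery -t sendtargets -p $portal
done
# log in to all discovered nodes, then check that multipathd
# has assembled the three paths into a single map
iscsiadm -m node --login
multipath -ll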

iSCSI multipath to an Equallogic box over 3x1Gbit, inside a Xen domU (i.e. the domU gets 3 NICs, each bridged to one of the NICs on dom0):

------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
64011  97 196484  49 91937  33 54583  81 160031  33 531.6   0

KVM guest with iSCSI multipath, same bridging setup with 3 tap devices but *NO* jumbo frames:
------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
48326  72 104770  39 57539  26 42610  79 123867  43 547.5   1


This is pretty good performance for Xen, and at a 1500-byte MTU it's definitely not bad for KVM either. The catch, however, is that I was unable to turn on jumbo frames in the KVM guest. I've applied the patch from https://bugzilla.redhat.com/show_bug.cgi?id=473114 to the guest kernel and changed some 4096-byte buffers in vl.c and net.c to 16384, but I still couldn't configure the guest's MTU to anything above 1500:

test:~# ifconfig eth3 mtu 1500
test:~# ifconfig eth3 mtu 1501
SIOCSIFMTU: Invalid argument

So my question is: how can I get virtio to play nice and accept a 9000-byte MTU? Then I can finish my performance comparison, and perhaps extend it with iometer.


Thanks!

Regards, infernix





bonnie++ output for the Xen domU (kernel 2.6.27.25), which has the multipathed disk configured as root disk xvda1:

------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
77918  98 115400  18 56323   8 60034  69 122466   7 422.4   0

Here's bonnie++ output for the KVM guest (kernel 2.6.27.25), using virtio disk vda1 mapped to the /dev/mapper/multipath disk:

------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
59461  95 97238  17 60831  12 62181  95 205094  25 656.4   2

Here's host performance with Xen dom0 (kernel 2.6.26-2-amd64 from Lenny) directly on the /dev/mapper/multipath disk:

------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
66519  93 172569  50 85457  35 59671  86 164754  40 451.8   0

And here's native performance with 2.6.27.25 (no Xen) directly on the /dev/mapper/multipath disk:

------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
71796  98 182818  53 85511  29 61484  79 165302  31 668.4   1

The above tests were performed with blockdev --setra 16384 and MTU 1500.
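(Concretely, that was along these lines on the device under test, i.e. the mapper device on the host, /dev/vda in the guest; --getra confirms it stuck, and 16384 sectors works out to 8MB of readahead:

blockdev --setra 16384 /dev/mapper/36090a0383049a2ac41a4643f000070c2
blockdev --getra /dev/mapper/36090a0383049a2ac41a4643f000070c2
)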


Here's the native Linux host with jumbo frames doing direct IO on the /dev/mapper/multipath disk:

------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
73616  99 195577  48 96794  27 68845  84 201899  29 630.3   1

Here's KVM guest performance (with jumbo frames on the host's iSCSI interfaces), cache=writethrough, with the multipath disk as /dev/vda1:

------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
60814  96 93222  15 64166  12 58015  94 258557  31 649.3   2

Here's KVM guest performance (with jumbo frames on the host's iSCSI interfaces), cache=writeback and a bonnie++ size of 2.5x host RAM, with the multipath disk as /dev/vda1:

------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
52627  95 120630  23 100333  22 61271  94 284889  37 464.6   2

Xen domU (kernel 2.6.27.25), which has the multipathed disk configured as root disk xvda1, plus jumbo frames in dom0:
------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
76316  97 116028  19 58278   9 60066  71 131953   9 282.8   0



KVM guest script:

#!/bin/sh
# bring the taps up and attach them to the bridges once qemu has created
# them (the backgrounded sleep waits crudely for the tap devices to exist)
( sleep 2s; for i in 0 4 5 6; do brctl addif br$i tap$i; ifconfig tap$i 0.0.0.0 up promisc; ifconfig tap$i mtu 9000; done ) &
/usr/local/kvm/bin/qemu-system-x86_64 -m 2048 -localtime -curses \
-net nic,model=virtio,vlan=0 -net tap,vlan=0,ifname=tap0,script=/bin/true \
-net nic,model=virtio,vlan=4,macaddr=00:16:42:51:34:a0 -net tap,vlan=4,ifname=tap4,script=/bin/true \
-net nic,model=virtio,vlan=5,macaddr=00:16:42:51:34:a1 -net tap,vlan=5,ifname=tap5,script=/bin/true \
-net nic,model=virtio,vlan=6,macaddr=00:16:42:51:34:a2 -net tap,vlan=6,ifname=tap6,script=/bin/true \
-initrd /boot/initrd.img-2.6.27.25-001.core2 \
-kernel /boot/vmlinuz-2.6.27.25-001.core2 \
-append 'root=/dev/vda1' \
-drive file=/dev/mapper/36090a0383049a2ac41a4643f000070c2,if=virtio,boot=on,cache=writeback
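As an aside, the backgrounded sleep races against qemu creating the taps; qemu can instead run a script per tap via script=, invoking it with the interface name as $1 once the device exists. A sketch of such an ifup script, assuming the tapN -> brN naming used above (the path is hypothetical):

#!/bin/sh
# /usr/local/etc/qemu-ifup (hypothetical path); qemu calls this with the
# tap name as $1 after creating the device, so no sleep is needed
tap="$1"
n="${tap#tap}"                 # tap4 -> 4, matching the br4 naming above
ifconfig "$tap" 0.0.0.0 up promisc
ifconfig "$tap" mtu 9000
brctl addif "br$n" "$tap"

Each -net tap option then gets script=/usr/local/etc/qemu-ifup instead of script=/bin/true.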


