Re: GFS and performance

Pasi Kärkkäinen wrote:
On Thu, Jan 07, 2010 at 12:13:50AM +0000, Gordan Bobic wrote:
Paras pradhan wrote:
I have a GFS-based shared storage cluster that connects to a SAN by Fibre Channel. This GFS shared storage holds several virtual machines. When I run hdparm from the host against a GFS share, I get the following results.

---
hdparm -t /guest_vms1

/dev/mapper/test_vg1-prd_vg1_lv:
Timing buffered disk reads:  262 MB in  3.00 seconds =  87.24 MB/sec
---


Now, from within the virtual machines, the I/O is much lower:

---
hdparm -t /dev/mapper/VolGroup00-LogVol00
/dev/mapper/VolGroup00-LogVol00:
Timing buffered disk reads:   88 MB in  3.00 seconds =  29.31 MB/sec
---
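
A second data point that bypasses the guest page cache can be taken with a direct-I/O read via dd; a minimal sketch, reusing the guest device shown above:

---
# sequential 256MB read with the page cache bypassed (iflag=direct)
dd if=/dev/mapper/VolGroup00-LogVol00 of=/dev/null bs=1M count=256 iflag=direct
---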

I am looking for ways to increase read/write I/O within my virtual machines. Would tuning GFS help in this case?

Sorry if my question is not relevant to this list
I suspect you'll find that is pretty normal for the virtualization-induced I/O penalty. Virtualization really, truly, utterly sucks when it comes to I/O performance.

My I/O performance tests (done using kernel building) show that the bottleneck was always disk I/O, including when the entire kernel source tree was pre-cached in a guest with 2GB of RAM. The _least_ horribly performing virtualization solution was VMware (tested with the latest Player 3.0, but verified against the latest Server, too). That managed to complete the task in "only" 140% of the time the bare metal machine took (the host machine had its memory limited to 2GB with the mem= kernel option to make sure the test was fair). So, 40% slower than bare metal.
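
For reference, that memory cap is just a kernel boot parameter; a minimal sketch of what the grub.conf kernel line might look like (kernel version and device paths are illustrative, assuming a RHEL-style GRUB setup):

---
# /boot/grub/grub.conf (illustrative) -- cap the host at 2GB for a fair test
title CentOS (2.6.18-164.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-164.el5 ro root=/dev/VolGroup00/LogVol00 mem=2048M
        initrd /initrd-2.6.18-164.el5.img
---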

Paravirtualized Xen was close behind, followed very closely by non-paravirtualized KVM (which was actually slower when paravirtualized drivers were used!). VirtualBox came so far behind it's not even worth mentioning.


What, you're saying VMware Server (and player) were faster than Xen PV?

I have a hard time believing that, based on my own experiences.

Yes, that is exactly what I'm saying. But the best performing virtualization solution (VMware) still had a 40% performance penalty in disk I/O compared to bare metal. And regardless of which one is least slow, they are all so slow that they are only worth considering if you are doing nothing more demanding than consolidating mostly idle machines. The VM may feel faster in terms of boot times and suchlike (the second time around, when all the data is cached in the host's RAM), but that is all smoke and mirrors and doesn't stand up to scrutiny.
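
An easy way to take the host's page cache out of that equation when comparing runs is to flush it between tests; a minimal sketch (run as root on the host, purely illustrative):

---
# flush dirty pages, then drop the page cache, dentries and inodes
# so the next run has to go back to the disks
sync
echo 3 > /proc/sys/vm/drop_caches
---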

The only virtualization solutions that deliver on the sort of performance claims the big vendors are making are the likes of OpenVZ and VServers, but those are mostly just chroots: more like FreeBSD jails or Solaris zones with a bit of network interface virtualization thrown in than proper virtualization.

If you don't believe me, try it yourself. Do a full kernel build with the stock RH .config file using make -j 8 on a quad core box, first in a VM with 2GB of RAM and then on the bare metal box limited to 2GB with the mem= kernel boot parameter, and see how long it takes. I make it 6.5 minutes on bare metal on my 3.2GHz Core2 vs. about 9.5 minutes in a VM on the same machine (VMware, paravirtualized Xen and KVM come reasonably close together). Each was tested multiple times, and the results were consistent.
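
A rough sketch of that benchmark, run identically inside the VM and on the memory-limited host (source path and config file name are illustrative):

---
# build the kernel with the stock RH config; paths are illustrative
cd /usr/src/kernels/linux
cp /boot/config-$(uname -r) .config
yes "" | make oldconfig > /dev/null

# time the parallel build
make clean
time make -j 8 > /dev/null
---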

Gordan

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster

