Re: GFS and performance

On Thu, Jan 07, 2010 at 12:13:50AM +0000, Gordan Bobic wrote:
> Paras pradhan wrote:
> >I have a GFS-based shared storage cluster that connects to a SAN over 
> >Fibre Channel. The GFS shared storage holds several virtual machines. 
> >Running hdparm from the host against a GFS share, I get the following 
> >results.
> >
> >--
> >hdparm -t /guest_vms1
> >
> >/dev/mapper/test_vg1-prd_vg1_lv:
> >Timing buffered disk reads:  262 MB in  3.00 seconds =  87.24 MB/sec
> >---
> >
> >
> >Now, from within the virtual machines, the I/O is much lower:
> >
> >---
> >hdparm -t /dev/mapper/VolGroup00-LogVol00 
> >
> >/dev/mapper/VolGroup00-LogVol00:
> > Timing buffered disk reads:   88 MB in  3.00 seconds =  29.31 MB/sec
> >---
> >
> >I am looking for ways to increase I/O read/write performance within 
> >my virtual machines. Would tuning GFS help in this case?
> >
> >Sorry if my question is not relevant to this list
> 
> I suspect you'll find that's a pretty normal virtualization-induced 
> I/O penalty. Virtualization really, truly, utterly sucks when it comes 
> to I/O performance.
> 
> My I/O performance tests (done using kernel builds) showed that the 
> bottleneck was always disk I/O, even with the entire kernel source 
> tree pre-cached in a guest with 2GB of RAM. The _least_ horribly 
> performing virtualization solution was VMware (tested with the latest 
> Player 3.0, but verified against the latest Server, too). That managed 
> to complete the task in "only" 140% of the time the bare metal machine 
> took (the host machine had its memory limited to 2GB with the mem= 
> kernel option to make sure the test was fair). So, 40% slower than 
> bare metal.
> 
> Paravirtualized Xen was close behind, followed very closely by 
> non-paravirtualized KVM (which was actually slower when paravirtualized 
> drivers were used!). VirtualBox came so far behind it's not even worth 
> mentioning.
> 
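
Back to the original question: GFS tuning probably won't close a gap 
that size, since the overhead is in the virtualization layer rather 
than in the filesystem, but the glock tunables are cheap to experiment 
with. A rough sketch, assuming GFS1 with /guest_vms1 as the mount 
point (the values are illustrative, not recommendations):

---
# Show the current tunables for this mount
gfs_tool gettune /guest_vms1

# Keep glocks cached longer before demoting them (default is 300 seconds)
gfs_tool settune /guest_vms1 demote_secs 600

# Use the faster, less exact statfs implementation
gfs_tool settune /guest_vms1 statfs_fast 1
---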

What, you're saying VMware Server (and Player) were faster than Xen PV?

I have a hard time believing that, based on my own experiences.
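
Before drawing conclusions from hdparm alone, it might be worth running 
a direct-I/O read test on both sides, since buffered reads inside a 
guest go through the hypervisor's caching path as well. A minimal 
sketch, using the device names from the output above (bs and count are 
arbitrary):

---
# On the host, against the GFS logical volume:
dd if=/dev/mapper/test_vg1-prd_vg1_lv of=/dev/null bs=1M count=1024 iflag=direct

# Inside the guest, against its root LV:
dd if=/dev/mapper/VolGroup00-LogVol00 of=/dev/null bs=1M count=1024 iflag=direct
---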

-- Pasi

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
