RE: GFS performance.


Yes, and yes.  Use the "gfs_tool sb <device> proto lock_nolock" command
on an unmounted filesystem, and remount.  (Obviously, you cannot mount
the fs on more than one node after you do this.)
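For reference, the full single-node sequence might look like the sketch
below. The device path and mount point are placeholders, not taken from
this thread, and the filesystem must be unmounted on every cluster node
first:

```
# Sketch only -- substitute your own device and mount point.
umount /vz                                     # fs unmounted cluster-wide
gfs_tool sb /dev/vg0/gfslv proto lock_nolock   # rewrite superblock protocol
mount -t gfs /dev/vg0/gfslv /vz                # remount; single-node only now
```

Switching back is the same command with "proto lock_dlm".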

Jeff 

> -----Original Message-----
> From: linux-cluster-bounces@xxxxxxxxxx 
> [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Vikash 
> Khatuwala
> Sent: Saturday, April 25, 2009 4:26 AM
> To: linux clustering
> Subject: RE:  GFS performance.
> 
> Hi,
> 
> Can I downgrade the lock manager from lock_dlm to lock_nolock? Do I
> need to unmount the GFS partition before changing it? I want to see
> if it makes any performance improvement.
> 
> Thanks,
> Vikash.
> 
> 
> At 11:18 AM 21-04-09, Vikash Khatuwala wrote:
> >Hi,
> >
> >I am using Virtuozzo OS virtualization, which does not have a single
> >file for the entire VM's filesystem. All VMs are simply
> >sub-directories, and OS files are stored in a common templates
> >directory which is symlinked back into each VM's directory. If an OS
> >file is changed inside the VM, the symlink breaks and a new file is
> >put in the VM's private directory. I can't use GFS2 because it is not
> >supported by Virtuozzo. All VMs are simply running web/db/ftp.
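The break-on-write behavior described above can be mimicked with plain
coreutils; the paths here are made up purely for illustration:

```shell
# Hypothetical layout: a shared template dir and a per-VM private dir.
mkdir -p /tmp/vzdemo/template /tmp/vzdemo/vm1
echo "stock file" > /tmp/vzdemo/template/app.conf
ln -s /tmp/vzdemo/template/app.conf /tmp/vzdemo/vm1/app.conf  # VM sees shared copy

# When the file is changed inside the VM, the symlink is replaced
# by a private regular file ("the symlink breaks"):
rm /tmp/vzdemo/vm1/app.conf
echo "modified file" > /tmp/vzdemo/vm1/app.conf

ls -l /tmp/vzdemo/vm1/app.conf   # now a regular file, not a link
```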
> >
> >So this basically means that there are a lot of symbolic links
> >(small files). GFS has a block size of 4K, so I also chose 4K as my
> >block size for the performance testing, to assess the worst case
> >scenario. If I change the block size to 256K, the performance
> >difference between ext3 and GFS is minimal. Also, when I migrate a
> >VM from GFS (RAID5 SAS 15K) to ext3 (single SATA disk), there is a
> >significant, noticeable performance gain!
> >
> >The tests below are on the same disk set (5-disk RAID5, SAS 15K)
> >with 2 partitions, GFS and ext3.
> >Results at 4K random reads:
> >GFS : about 1500K/s
> >ext3 : about 7000K/s
> >
> >Results at 256K random reads:
> >GFS : about 45000K/s
> >ext3 : about 50000K/s
> >
> >Results at 256K sequential reads:
> >GFS : over 110,000K/s (my single GB NIC maxes out)
> >ext3 : over 110,000K/s (my single GB NIC maxes out)
> >
> >The fio job file is below; only rw and blocksize were changed for
> >the 3 scenarios above.
> >[random-read1]
> >rw=randread
> >size=10240m
> >directory=/vz/tmp
> >ioengine=libaio
> >iodepth=16
> >direct=1
> >invalidate=1
> >blocksize=4k
> >
> >[random-read2]
> >rw=randread
> >size=10240m
> >directory=/vz/tmp
> >ioengine=libaio
> >iodepth=16
> >direct=1
> >invalidate=1
> >blocksize=256k
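For anyone reproducing the numbers: assuming the job sections above are
saved to a single file (the filename here is arbitrary), each run can be
invoked one section at a time with fio's --section option:

```
fio --section=random-read1 gfs-test.fio
fio --section=random-read2 gfs-test.fio
```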
> >
> >Thanks,
> >Vikash.
> >
> >
> >At 01:00 AM 21-04-09, Jeff Sturm wrote:
> >> > -----Original Message-----
> >> > From: linux-cluster-bounces@xxxxxxxxxx 
> >> > [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Vikash 
> >> > Khatuwala
> >> > Sent: Monday, April 20, 2009 11:23 AM
> >> > To: linux-cluster@xxxxxxxxxx
> >> > Subject:  GFS performance.
> >> >
> >> > OS : CentOS 5.2
> >> > FS : GFS
> >>
> >>Can you easily install CentOS 5.3 and GFS2?  GFS2 claims to have
> >>some performance improvements over GFS1.
> >>
> >> > Now I need to make a decision to go with GFS or not. Clearly, at
> >> > one quarter the performance we cannot afford it; it also doesn't
> >> > sound right, so I would like to find out what's wrong.
> >>
> >>Be careful with benchmarks, as they often do not give you a good 
> >>indication of real-world performance.
> >>
> >>Are you more concerned with latency or throughput?  Any single read
> >>will almost certainly take longer to complete over GFS than ext3;
> >>there's simply more overhead involved with any cluster filesystem.
> >>However, that's not to say you're limited in how many reads you can
> >>execute in parallel, so the overall number of reads you can perform
> >>in a given time interval may not differ by 4x at all. (Are you
> >>running a parallel benchmark?)
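A parallel variant of the earlier job could be sketched by adding
numjobs; this fragment is illustrative only, and the values are not from
the original test:

```
; fio job fragment -- illustrative, not from the original test runs
[parallel-random-read]
rw=randread
size=1024m
directory=/vz/tmp
ioengine=libaio
iodepth=16
direct=1
blocksize=4k
numjobs=8          ; eight concurrent readers
group_reporting    ; aggregate the per-job numbers
```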
> >>
> >>Jeff
> >>
> >>
> >>--
> >>Linux-cluster mailing list
> >>Linux-cluster@xxxxxxxxxx
> >>https://www.redhat.com/mailman/listinfo/linux-cluster
> 
> 
> 


