Hi all, I have a question, and probably some advice, about GFS relating to a performance issue. We use a DDN SA6620 system for storage. It has 60 SAS disks and can build RAID 6 arrays as either 4 data + 2 parity or 8 data + 2 parity disks. The disks are 2 TB SAS drives. We have a SAN with 2 SAN switches and 4 HP DL585 G2 servers. In the 8+2 configuration we have 6 RAID 6 arrays, for 120 TB of raw disk capacity. We divided the disks into 6 disk pools with 4 vdisks per pool, each vdisk being 3646 GB. We then created 4 LUNs, each made up of 6 vdisks (one from each pool). So each of the 4 LUNs does I/O across all 60 disks of the SA6620; the idea was to have all disks serving I/O for all servers.
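As a sanity check on the layout arithmetic above (all figures are restated from the description; nothing new is assumed):

```shell
# Restating the layout figures quoted above, to check they are consistent.
POOLS=6                # 6 disk pools, one 8+2 RAID 6 array each
DISKS_PER_POOL=10      # 8 data + 2 parity disks
DISK_TB=2              # 2 TB SAS drives
VDISKS_PER_POOL=4
VDISK_GB=3646          # size of each vdisk

echo "total disks:        $((POOLS * DISKS_PER_POOL))"           # 60
echo "raw capacity (TB):  $((POOLS * DISKS_PER_POOL * DISK_TB))" # 120
echo "vdisk capacity (GB): $((POOLS * VDISKS_PER_POOL * VDISK_GB))" # 87504
```

So roughly 87.5 TB of usable vdisk capacity out of 120 TB raw, and since each LUN takes one vdisk from every pool, any I/O to a LUN is spread over all 60 spindles.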
The pool, vdisk, and LUN configuration is as follows.
First we deployed GFS2 on the 4 DL585 servers and ran standalone "dd" tests, both serially and in parallel from different servers. In the serial tests we measured 70 GB/s to 96 GB/s; after adding the noatime mount option we got 100 GB/s and 140 GB/s for writes. In parallel it gets much worse. Secondly, we formatted the LUNs with GFS instead of GFS2. We get 500 GB/s from one server at a time, and 450 GB/s in 4-node I/O tests. Here I agree with Corey Kovacs that tuning on the storage side is important. But the comparison between the gfs and gfs2 formatting options is very interesting, because GFS seems faster than GFS2. I didn't expect this result. Is it normal? Another important result concerns the number of journals in GFS or GFS2.
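For reference, the kind of serial dd write test described above can be sketched along these lines; the mount point, block size, and count here are illustrative assumptions, not necessarily the exact commands used:

```shell
# Mount point of the clustered filesystem; /mnt/gfs is an assumed path.
MNT=/mnt/gfs

# Serial write test from one node: stream zeros to a file on the shared
# mount and let dd report throughput when it finishes. conv=fsync forces
# the data to disk before dd exits, so the number reflects real writes.
dd if=/dev/zero of="$MNT/ddtest.$(hostname)" bs=1M count=4096 conv=fsync

# For the parallel case, run the same command at the same time from each
# of the four servers, with each node writing its own file.
```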
If the number of journals on your GFS volume is higher than the number of servers (for example, extra journals created for future use), it affects GFS performance very dramatically. It is better to add journals later, when you need them.

Regards,
Aydin SASMAZ

-----Original Message-----

Hi,

On Thu, 2010-03-04 at 09:13 -0600, Doug Tucker wrote:
> Steven,
>
> We discovered the same issue the day we went into production with ours.
> The tuning parameter that made it production ready for us was:
>
> /sbin/gfs_tool settune /mnt/users statfs_fast 1
>
> Why statfs_fast is not set to on by default is beyond my comprehension,
> I don't think anyone could run production without it on. Anyway, you
> have to set that for every mount point on the cluster, and it has to be
> set on all nodes. We just created an init script that runs on startup
> after all the cluster services are started.
>
I suspect that is historical, so that we don't surprise people who've not been used to that feature when they upgrade their kernels. In GFS2 it defaults to fast.

I'm also trying (gradually) to ensure that there is a way to set all parameters via the mount command line in GFS2, and therefore to avoid having to run special programs after mount to set such parameters. We are not there yet, but we are pretty close now, I think,

Steve.

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster