RE: GFS Tuning - it's just slow, too slow for production

Hi all,

 

I have a question, and perhaps some advice, about GFS relating to this performance issue. We use a DDN SA6620 system for storage. It contains 60 SAS disks of 2 TB each, and the device can build RAID 6 arrays as either 4 data + 2 parity or 8 data + 2 parity disks. Our SAN has 2 SAN switches and 4 HP DL585 G2 servers.

 

In the 8+2 configuration we have 6 RAID 6 arrays, for 120 TB of raw disk in total. We divided the disks into 6 disk pools with 4 vdisks per pool, each vdisk being 3646 GB. We then created 4 LUNs, each made of 6 vdisks. The result is 4 LUNs that each spread I/O across all 60 disks of the DDN SA6620; the aim was to have every disk doing I/O for every server.

 

Raw disk size (TB):              120
Number of RAID-set pools:        6
RAID6 data disks per set:        8 (+2 parity)
Net size per disk (GB):          1823
Net size per RAID6 set (GB):     14584
vdisks per disk pool:            4
Total vdisks:                    24
vdisk size (GB):                 3646
vdisks per LUN:                  6
LUN size (GB):                   21876
LUNs created:                    4
Net usage for 4 LUNs (GB):       87504
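
In other words: each 8+2 set nets 8 × 1823 GB = 14584 GB; each set is split into 4 vdisks of 14584 / 4 = 3646 GB; each LUN takes one vdisk from each of the 6 pools, so 6 × 3646 GB = 21876 GB per LUN; and the 4 LUNs together provide 4 × 21876 GB = 87504 GB net.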

 

The pool, vdisk, and LUN configuration is as follows:

 

            pool1   pool2   pool3   pool4   pool5   pool6
LVM LUN1    vd-0    vd-4    vd-8    vd-12   vd-16   vd-20
LVM LUN2    vd-1    vd-5    vd-9    vd-13   vd-17   vd-21
LVM LUN3    vd-2    vd-6    vd-10   vd-14   vd-18   vd-22
LVM LUN4    vd-3    vd-7    vd-11   vd-15   vd-19   vd-23

First we deployed GFS2 for the 4 DL585 servers and ran standalone "dd" tests, both serially and in parallel from different servers. In the serial tests we measured between 70 MB/s and 96 MB/s; after adding the noatime mount option we got 100 MB/s and 140 MB/s write results. In parallel it gets much worse.
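
For reference, the tests were plain dd runs along these lines (the block size, count, and paths below are my assumptions, not the exact commands we used):

  # serial test: one node writes a large file to the shared mount
  dd if=/dev/zero of=/gfs01/ddtest.$(hostname) bs=1M count=10000

  # parallel test: start the same command on all 4 nodes at once,
  # each node writing its own file on the same GFS2 filesystem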

 

Secondly, we reformatted the LUNs with GFS instead of GFS2. We got 500 MB/s for one server at a time, and 450 MB/s in 4-node I/O tests.
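
The reformat step was along these lines (the cluster name, filesystem name, and device path are assumptions; -j 4 matches our 4 nodes):

  # original filesystem: GFS2
  mkfs.gfs2 -p lock_dlm -t mycluster:data1 -j 4 /dev/vg_lun1/lv_data

  # same logical volume reformatted as classic GFS
  gfs_mkfs -p lock_dlm -t mycluster:data1 -j 4 /dev/vg_lun1/lv_data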

 

On this point I agree with Corey Kovacs that tuning on the storage side would be important. But the comparison between the GFS and GFS2 formatting options is very interesting, because GFS seems faster than GFS2. I didn't expect this result. Is it normal?

 

Another important result concerns the number of GFS or GFS2 journals. If your GFS volume has more journals than you have servers (created for future use), it hurts GFS performance very dramatically. It is better to add journals later, when you actually need them.
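
That is, format with only as many journals as you have nodes today; both GFS and GFS2 can add journals to a mounted filesystem later (the mount point here is an assumption):

  # add one journal to a mounted GFS filesystem when a new node joins
  gfs_jadd -j 1 /gfs01

  # GFS2 equivalent
  gfs2_jadd -j 1 /gfs01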

 

 

Regards

 

Aydin SASMAZ

-----Original Message-----
From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Steven Whitehouse
Sent: Thursday, March 04, 2010 5:46 PM
To: linux clustering
Subject: Re: GFS Tuning - it's just slow, too slow for production

 

Hi,

 

On Thu, 2010-03-04 at 09:13 -0600, Doug Tucker wrote:

> Steven,
>
> We discovered the same issue the day we went into production with ours.
> The tuning parameter that made it production-ready for us was:
>
> /sbin/gfs_tool settune /mnt/users statfs_fast 1
>
> Why statfs_fast is not set to on by default is beyond my comprehension;
> I don't think anyone could run production without it on.  Anyway, you
> have to set that for every mount point on the cluster, and it has to be
> set on all nodes.  We just created an init script that runs on startup
> after all the cluster services are started.
>

I suspect that is historical, so that we don't surprise people who've not been used to that feature when they upgrade their kernels. In GFS2 it defaults to fast.
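
For anyone scripting the workaround on GFS in the meantime, a minimal sketch of the init-script approach Doug describes (the mount points are placeholders; it must run on every node, after the cluster services are up):

  #!/bin/sh
  # enable fast statfs on every GFS mount point; run on all nodes
  for mnt in /mnt/users /mnt/gfs01; do
      /sbin/gfs_tool settune "$mnt" statfs_fast 1
  done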

 

I'm also trying (gradually) to ensure that there is a way to set all parameters via the mount command line in GFS2, and therefore to avoid having to run special programs after mount to set such parameters. We are not there yet, but we are pretty close now, I think.

 

Steve.

 

 

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
