Re: GFS2 tuning recommendations on RHEL 5.3

Hi,

On Tue, 2009-01-13 at 13:40 -0500, Michael Hayes wrote:
> I have a customer who is currently looking at standing up four 8-node RHCS clusters, each of which will have an 8TB GFS2 file system: RHEL 5.3 32-bit, VMware host virtualization, and fence_vmware_vi.py fencing scripts.  The cluster and fencing are all working.
> 
> I am looking to get some GFS2 tuning recommendations for these file systems.  They will contain directory structures and files similar to the configurations below; these are rough estimates from the application vendor.  Currently we are looking at GFS2 partitions set up with the default settings: default i386 block size, 8 journals.
> 
It looks like you won't really need to do a lot of tuning; it should be
ok on defaults. The only issue is how often the various processes
running on different nodes try to access the same data files. Provided
it's not too often, everything should be fine.
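
If you do end up wanting to change something later, a common first step on
GFS2 is to mount with noatime (the atime_quantum tunable in your gettune
output only throttles atime updates; noatime avoids them entirely), and
individual tunables can be changed per mount point at runtime with
gfs2_tool settune. A rough sketch follows; the mount point and the value
600 are only illustrative, not a recommendation:

```shell
# Avoid atime-update write traffic on the shared filesystem
# (add noatime,nodiratime to the fstab entry as well so it
# survives a remount):
mount -o remount,noatime,nodiratime /datastore

# Read the current tunables, then set one at runtime;
# settune takes <mountpoint> <tunable> <value>.
gfs2_tool gettune /datastore
gfs2_tool settune /datastore demote_secs 600   # illustrative value only
```

Runtime changes made with settune do not persist across a remount, so
anything you settle on needs to be reapplied at mount time.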

Steve.

> Chiliad Raw Data (html/xml) files:
> Estimate # of Directories up to 100k
> Typical FS Layout: /datastore/chiliad/extract/<form>/<year>/<month>/<day of month>/<bin>/*.html
> Number of files in a directory: min=1,000, max=10,000, avg=1,000
> File size: min=1KB, max=2MB, avg=5KB
> Directory depth: min=1, max=25, avg=5
> 
> Chiliad Index files:
> Estimate # of Directories: Thousands
> Number of files in a directory: min=5, max=30, avg=15-20
> File size: min=1KB, max=2GB, avg=1-2GB
> Directory depth: min=1, max=10, avg=5
> 
> XXXXX:(/root)# gfs2_tool gettune /datastore/
> new_files_directio = 0
> new_files_jdata = 0
> quota_scale = 1.0000   (1, 1)
> quotad_secs = 5
> logd_secs = 1
> recoverd_secs = 60
> statfs_quantum = 30
> stall_secs = 600
> quota_cache_secs = 300
> quota_simul_sync = 64
> statfs_slow = 0
> reclaim_limit = 5000
> complain_secs = 10
> max_readahead = 262144
> atime_quantum = 3600
> quota_quantum = 60
> quota_warn_period = 10
> jindex_refresh_secs = 60
> log_flush_secs = 60
> incore_log_blocks = 1024
> demote_secs = 300
> 
> Thank you,
> 
> Michael Hayes 
> Red Hat 
> mhayes@xxxxxxxxxx 
> 
> 
> --
> Linux-cluster mailing list
> Linux-cluster@xxxxxxxxxx
> https://www.redhat.com/mailman/listinfo/linux-cluster

