Re: rhcs + gfs performance issues


 



Hopefully the following provides some relief ...

1. Enable the lock trimming tunable. It is particularly relevant if NFS-GFS is used by development-type workloads (editing, compiling, building, etc.) and/or after a filesystem backup. Unlike fast statfs, this tunable is set on a per-node basis (you don't need the same value on each node, and a mix of on and off within the same cluster is fine). Make the trimming very aggressive (> 50%) on the backup node (where you run the backup) and moderate (< 50%) on your active nodes. Experiment with different values to fit the workload. Google "gfs lock trimming wcheng" to pick up the technical background if needed.

shell> gfs_tool settune <mount_point> glock_purge <percentage>
        (e.g. gfs_tool settune /mnt/gfs1 glock_purge 50)
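
If you want to confirm the value took effect, one way (assuming the gettune output on your release lists the tunables as "name = value" pairs, and using /mnt/gfs1 as a stand-in mount point) is:

shell> gfs_tool gettune /mnt/gfs1 | grep glock_purge
        (should show glock_purge set to the percentage you just gave it)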

2. Turn on the readahead tunable. It is effective for large-file (streaming I/O) read performance. As I recall, one cluster (running an IPTV application) used val=256 for 400-500MB files; another with 2GB files used val=2048. Again, it is set on a per-node basis, so different values are fine on different nodes. See the note after the example below about making the setting stick.

shell> gfs_tool settune <mount> seq_readahead <val>
        (e.g. gfs_tool settune /mnt/gfs1 seq_readahead 2048)
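
One operational note (based on my understanding that settune values are runtime-only and do not survive a remount or reboot): if you want these tunables applied automatically, add the commands to a local startup script that runs after the GFS mounts come up. A hypothetical sketch, with mount point and values adjusted per node:

shell> echo "gfs_tool settune /mnt/gfs1 glock_purge 50" >> /etc/rc.local
shell> echo "gfs_tool settune /mnt/gfs1 seq_readahead 2048" >> /etc/rc.local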

3. Fast statfs tunable - do you have this turned on already? Unlike the two tunables above, it must be set to the same value on every node in the cluster.
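In case it isn't on yet, the knob follows the same settune pattern as above (assuming your gfs_tool version exposes the statfs_fast tunable; /mnt/gfs1 is a placeholder):

shell> gfs_tool settune <mount_point> statfs_fast 1
        (e.g. gfs_tool settune /mnt/gfs1 statfs_fast 1 - run it on every node)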

4. Understand the risks and performance implications of the NFS server's "async" vs. "sync" options. On a Linux NFS server, "sync" behavior is controlled by two different mechanisms - the client mount options and the server export options. By default the mount is "async" and the export is "sync". Even when the client explicitly requests an "async" mount, the Linux server still defaults to a "sync" export, which is particularly troublesome for gfs. I don't plan to give an example and/or suggest the exact export option here - hopefully this will force folks to do more research and fully understand the trade-off between performance and data liability. Most proprietary NFS servers on the market today use hardware features to ease this conflict between performance and data integrity. Mainline Linux servers (and RHEL) are entirely software-based, so they generally have problems in this regard.
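Without suggesting any particular setting, you can at least check which options are currently in effect on your setup (these commands only inspect; mount points are placeholders):

shell> exportfs -v
        (on the NFS server - lists each export with sync/async among its options)
shell> grep nfs /proc/mounts
        (on the NFS client - shows the client-side mount options in effect)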

Gfs1 in general doesn't do well with "sync" performance (its journal layer is too bulky). Gfs2 has the potential to do better (but I'm not sure).

There are also a few other things worth mentioning, but my flight is being called for boarding ... I'll stop here.

-- Wendy

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
