Re: GFS2 metadata performance

Hi,

On Fri, 2011-03-18 at 01:42 -0400, Valeriu Mutu wrote:
> Hi,
> 
> Has anyone done any GFS2 metadata performance benchmarks? If so, what have you found? Also, what performance tuning would be recommended to increase the metadata performance of a GFS2 filesystem?
> 
> I've recently run 'fdtree' [1] against a GFS2 filesystem as well as an ext3 filesystem. Here's what I've found:
> 
From time to time I run various tests, but fdtree has not been among
them.

> (fdtree on a GFS2 filesystem)
> # ./fdtree.bash -l 10 -d 3 -f 4 -s 1 -o /gfs2bench/fdtree
> fdtree-1.0.2: starting at /gfs2bench/fdtree//LEVEL0.vm1.23787/
>         creating/deleting 10 directory levels with 3 directories at each level
>         for a total of 88573 directories
>         with 4 files of size 4KiB per directory
>         for a total of 354292 files and 1417168KiB
> Sun Mar 13 00:45:31 EST 2011
> Sun Mar 13 00:58:46 EST 2011
> DIRECTORY CREATE TIME IN, OUT, TOTAL = 0, 795, 795
>         Directory creates per second =  111
> Sun Mar 13 00:58:46 EST 2011
> Sun Mar 13 03:00:44 EDT 2011
> FILE CREATE TIME IN, OUT, TOTAL      = 795, 4513, 3718
>         File creates per second      =  95
>         KiB per second               =  381
> Sun Mar 13 03:00:44 EDT 2011
> Sun Mar 13 04:49:08 EDT 2011
> FILE REMOVE TIME IN, OUT, TOTAL      = 4513, 11017, 6504
>         File removals per second     =  54
> Sun Mar 13 04:49:08 EDT 2011
> Sun Mar 13 05:02:58 EDT 2011
> DIRECTORY REMOVE TIME IN, OUT, TOTAL = 11017, 11847, 830
>         Directory removals per second =  106
> 
> (fdtree on an ext3 filesystem)
> # ./fdtree.bash -l 10 -d 3 -f 4 -s 1 -o /ext3bench/fdtree
> fdtree-1.0.2: starting at /ext3bench/fdtree//LEVEL0.vm1.25896/
>         creating/deleting 10 directory levels with 3 directories at each level
>         for a total of 88573 directories
>         with 4 files of size 4KiB per directory
>         for a total of 354292 files and 1417168KiB
> Sun Mar 13 18:41:11 EDT 2011
> Sun Mar 13 18:45:48 EDT 2011
> DIRECTORY CREATE TIME IN, OUT, TOTAL = 0, 277, 277
>         Directory creates per second =  319
> Sun Mar 13 18:45:49 EDT 2011
> Sun Mar 13 19:04:33 EDT 2011
> FILE CREATE TIME IN, OUT, TOTAL      = 278, 1402, 1124
>         File creates per second      =  315
>         KiB per second               =  1260
> Sun Mar 13 19:04:33 EDT 2011
> Sun Mar 13 19:09:15 EDT 2011
> FILE REMOVE TIME IN, OUT, TOTAL      = 1402, 1684, 282
>         File removals per second     =  1256
> Sun Mar 13 19:09:15 EDT 2011
> Sun Mar 13 19:10:42 EDT 2011
> DIRECTORY REMOVE TIME IN, OUT, TOTAL = 1684, 1771, 87
>         Directory removals per second =  1018
> 
> In other words, ext3 is about 3 times faster at creating files/dirs, about 20 times faster at removing existing files, and about 10 times faster at removing existing directories. I've added the following lines to /etc/cluster/cluster.conf to remove the plock rate limit:
>  <dlm plock_ownership="1" plock_rate_limit="0"/>
>  <gfs_controld plock_rate_limit="0"/>
> but this didn't help increase the GFS2 metadata performance.
> 
These settings only affect the performance of fcntl POSIX locks, which
have no on-disk representation and are processed in userspace by
gfs_controld/dlm_controld (depending on which version you are using).
So that is an expected result.
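For reference, the locks those settings govern are fcntl(2) byte-range
locks taken roughly as in the sketch below (a minimal illustration
only; the file path is just an example):

/*
 * Minimal sketch of an fcntl POSIX lock ("plock"), the kind of
 * request that plock_rate_limit/plock_ownership govern. The path is
 * only an example.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        int fd = open("/gfs2bench/lockfile", O_RDWR | O_CREAT, 0644);
        struct flock fl = {
                .l_type   = F_WRLCK,    /* exclusive write lock */
                .l_whence = SEEK_SET,
                .l_start  = 0,
                .l_len    = 0,          /* 0 = lock the whole file */
        };

        if (fd < 0) {
                perror("open");
                return 1;
        }

        /*
         * This request is handled in userspace by gfs_controld or
         * dlm_controld; it never touches on-disk metadata.
         */
        if (fcntl(fd, F_SETLKW, &fl) < 0) {
                perror("fcntl");
                return 1;
        }

        fl.l_type = F_UNLCK;
        fcntl(fd, F_SETLK, &fl);
        close(fd);
        return 0;
}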

> Note that I've used the same setup for the GFS2 and ext3 tests: same machine, same networking config, same storage array (which is not used by anything else).
> I also confirmed using "pingpong" [2] that I get a rate of about 4K locks/sec on this particular node against GFS2.
> 
The pingpong test does not test metadata performance.
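What a metadata benchmark like fdtree actually exercises is loops of
creates and removals, which hit inode allocation and directory updates
rather than byte-range locks. A rough sketch of that kind of workload
(the directory path is arbitrary and assumed to exist):

/*
 * Rough sketch of metadata-only work of the sort fdtree generates:
 * each iteration allocates an inode, updates the parent directory,
 * then frees both again. No file data is written at all.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        char path[64];
        int i, fd;

        for (i = 0; i < 1000; i++) {
                snprintf(path, sizeof(path), "/gfs2bench/meta/f%d", i);
                fd = open(path, O_CREAT | O_EXCL | O_WRONLY, 0644);
                if (fd < 0) {
                        perror("open");
                        return 1;
                }
                close(fd);
                unlink(path);   /* removal is pure metadata too */
        }
        return 0;
}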

> Does anyone have any hints/ideas as what might help increase the metadata performance of a GFS2 filesystem?
> 
> [1] https://computing.llnl.gov/?set=code&page=sio_downloads
> [2] http://wiki.samba.org/index.php/Ping_pong
> 
> Best,
There are a number of variables which you don't mention, but which are
important for the test results. Firstly, what kind of storage are you
using? Secondly, was this lock_dlm or lock_nolock? Also, was there any
memory pressure while the tests were running? Was noatime set on the
filesystem (or indeed, were any other mount options used)?
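One quick way to answer the mount option question is to check what the
filesystem is actually mounted with in /proc/mounts, for example (a
minimal sketch; "/gfs2bench" stands in for your mount point):

/*
 * Print the options the test filesystem is actually mounted with
 * (noatime included, if set). "/gfs2bench" is just the mount point
 * used in the tests above.
 */
#include <mntent.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
        FILE *mounts = setmntent("/proc/mounts", "r");
        struct mntent *m;

        if (!mounts) {
                perror("setmntent");
                return 1;
        }
        while ((m = getmntent(mounts)) != NULL) {
                if (strcmp(m->mnt_dir, "/gfs2bench") == 0)
                        printf("%s %s %s\n", m->mnt_fsname,
                               m->mnt_type, m->mnt_opts);
        }
        endmntent(mounts);
        return 0;
}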

Steve.


--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster

