Hi,

On Wed, 2011-02-16 at 19:36 +0000, Alan Brown wrote:
> > A faster way to just grab lock numbers is to grep for gfs2
> > in /proc/slabinfo as that will show how many are allocated at any one
> > time.
>
> True, but it doesn't show how many are used per fs.
>
For the GFS2 glocks, that doesn't matter - all of the glocks are held
in a single hash table no matter how many filesystems there are. The
DLM, however, has a hash table for each lockspace (per filesystem), so
it might make a difference there.

> FWIW, here are current stats on each cluster node (it's evening and
> lightly loaded)
>
> gfs2_quotad        47     108 144 27 1 : tunables 120 60 8 : slabdata      4      4 0
> gfs2_rgrpd       9563    9618 184 21 1 : tunables 120 60 8 : slabdata    458    458 0
> gfs2_bufdata   318804  318840  96 40 1 : tunables 120 60 8 : slabdata   7971   7971 1
> gfs2_inode     725605  725605 800  5 1 : tunables  54 27 8 : slabdata 145121 145121 0
> gfs2_glock     738297  738297 424  9 1 : tunables  54 27 8 : slabdata  82033  82033 0
>
> gfs2_quotad        94     189 144 27 1 : tunables 120 60 8 : slabdata      7      7 0
> gfs2_rgrpd       1658    1680 184 21 1 : tunables 120 60 8 : slabdata     80     80 0
> gfs2_bufdata  1065806 1067080  96 40 1 : tunables 120 60 8 : slabdata  26677  26677 0
> gfs2_inode     986986 1024845 800  5 1 : tunables  54 27 8 : slabdata 204969 204969 0
> gfs2_glock    1105575 1812825 424  9 1 : tunables  54 27 8 : slabdata 201425 201425 1
>
> gfs2_quotad        45     108 144 27 1 : tunables 120 60 8 : slabdata      4      4 2
> gfs2_rgrpd       6515    6573 184 21 1 : tunables 120 60 8 : slabdata    313    313 0
> gfs2_bufdata   100785  101000  96 40 1 : tunables 120 60 8 : slabdata   2525   2525 0
> gfs2_inode    2954515 2954515 800  5 1 : tunables  54 27 8 : slabdata 590903 590903 0
> gfs2_glock    3332311 3639843 424  9 1 : tunables  54 27 8 : slabdata 404427 404427 0
>
Thanks for the info. There is now a bug open (bz #678102) for
increasing the default DLM hash table size,

Steve.
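For anyone following along, the slabinfo approach being discussed can be sketched like this. It is a minimal one-liner wrapped in a function, assuming the standard /proc/slabinfo layout (field 1 is the cache name, field 2 the active object count); the function name and sample-file path are illustrative, not from the thread:

```shell
# Print the active object count for each gfs2_* slab cache.
# Assumes standard /proc/slabinfo field order: name, active_objs, ...
# (the "slabinfo - version" header line does not match and is skipped).
slab_summary() {
    awk '/^gfs2_/ { printf "%-16s %10d active objects\n", $1, $2 }' "$1"
}

# On a live system:
#   slab_summary /proc/slabinfo
```

As noted above, these counts are totals for the whole node, not per filesystem, since the caches are shared across all mounted GFS2 filesystems.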
--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster