Re: optimising DLM speed?

On Wed, Feb 16, 2011 at 02:12:30PM +0000, Alan Brown wrote:
> > You can set it via the configfs interface:
> 
> Given 24Gb ram, 100 filesystems, several hundred million of files
> and the usual user habits of trying to put 100k files in a
> directory:
> 
> Is 24Gb enough or should I add more memory? (96Gb is easy, beyond
> that is harder)
> 
> What would you consider safe maximums for these settings?
> 
> What about the following parameters?
> 
> buffer_size
> dirtbl_size

Don't change the buffer size, but I'd increase all the hash table sizes to
4096 and see if anything changes.

echo "4096" > /sys/kernel/config/dlm/cluster/rsbtbl_size
echo "4096" > /sys/kernel/config/dlm/cluster/lkbtbl_size
echo "4096" > /sys/kernel/config/dlm/cluster/dirtbl_size

(Before GFS file systems are mounted, as Steve mentioned.)
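The three echo commands above could be wrapped in a small helper so all tables are set consistently before mounting. This is just a sketch: the function name and the idea of parameterising the configfs directory are mine, not from the thread, and the `/sys/kernel/config/dlm/cluster` path should be verified on your own kernel first.

```shell
#!/bin/sh
# Hedged sketch: bump all three DLM hash table sizes in one go.
# Must run before any GFS file systems are mounted.
set_dlm_table_sizes() {
    dir="$1"   # configfs dir, e.g. /sys/kernel/config/dlm/cluster
    size="$2"  # new table size, e.g. 4096
    for tbl in rsbtbl_size lkbtbl_size dirtbl_size; do
        echo "$size" > "$dir/$tbl"
    done
}

# Example (assumes the dlm configfs tree is present):
# set_dlm_table_sizes /sys/kernel/config/dlm/cluster 4096
```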

Dave

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
