Re: optimising DLM speed?

Hi,

On Wed, 2011-02-16 at 14:12 +0000, Alan Brown wrote:
> > You can set it via the configfs interface:
> 
> Given 24GB RAM, 100 filesystems, several hundred million files and
> the usual user habit of putting 100k files in a directory:
> 
> Is 24GB enough, or should I add more memory? (96GB is easy; beyond that
> is harder)
> 
The more memory you add, the greater the potential for caching large
numbers of inodes, which in turn implies larger numbers of DLM locks.

So you are much more likely to see these issues at larger RAM sizes. If
you can easily go to 96GB, then I'd suggest starting with that.

> What would you consider safe maximums for these settings?
> 
That is a trickier question. There may be issues if you go above 2^16
hash buckets, due to the way in which the DLM organises its hash
tables. Dave Teigland can give you more info on that.

> What about the following parameters?
> 
> buffer_size
I doubt that this will need adjusting.

> dirtbl_size
That one might need adjusting, although it did not appear to be
significant in the profile results.
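
For reference, a rough sketch of setting these values through the
configfs interface. The attribute names below (rsbtbl_size,
lkbtbl_size, dirtbl_size, buffer_size) are the ones exposed under
/sys/kernel/config/dlm/cluster/ by the dlm module; the numbers are
purely illustrative, not recommendations. Note that these only take
effect for lockspaces created after the write, so they need to be set
before the GFS2 filesystems are mounted:

```shell
# Make sure configfs is mounted (it usually already is)
mount -t configfs none /sys/kernel/config 2>/dev/null

# The cluster directory appears once the dlm module is loaded
cd /sys/kernel/config/dlm/cluster

# Hash table sizes, in buckets (powers of two; see the 2^16 caveat above)
echo 16384 > rsbtbl_size    # resource table
echo 16384 > lkbtbl_size    # lock table
echo 16384 > dirtbl_size    # resource directory table

# Low-level comms buffer size, in bytes
echo 4096 > buffer_size
```

If you are running the cluster tools, check whether dlm_controld or
your cluster configuration rewrites these at join time, since anything
written here by hand can be overridden when the daemon starts.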

Steve.



--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster

