Re: optimising DLM speed?

> You can set it via the configfs interface:
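For the record, I take that to mean something like the sketch below. These are my own guesses, not something I have tested here: it assumes the standard /sys/kernel/config/dlm/cluster directory (which only appears once dlm_controld is running), that the attribute names match what my kernel exposes, and that the values are written before the lockspaces are created, i.e. before the GFS filesystems are mounted. The sizes are placeholders only:

    # make sure configfs is mounted (usually it already is)
    mount -t configfs none /sys/kernel/config 2>/dev/null

    # bump the DLM hash table sizes; new values only apply to
    # lockspaces created afterwards, so do this before mounting
    echo 4096 > /sys/kernel/config/dlm/cluster/rsbtbl_size
    echo 4096 > /sys/kernel/config/dlm/cluster/lkbtbl_size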

Given 24GB of RAM, 100 filesystems, several hundred million files, and the usual user habit of trying to put 100k files in a directory:

Is 24GB enough, or should I add more memory? (96GB is easy; beyond that is harder.)

What would you consider safe maximums for these settings?

What about the following parameters?

buffer_size
dirtbl_size
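(For context, the current values should be readable on a node where the cluster is already up, along these lines; same configfs location as in the sketch above, and the attribute names are what I see on my kernel, so they may differ on yours:)

    # print the current DLM tunables, including the two above
    cd /sys/kernel/config/dlm/cluster
    for f in buffer_size dirtbl_size rsbtbl_size lkbtbl_size; do
        printf '%s = %s\n' "$f" "$(cat "$f")"
    done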




--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster

