On Mon, Jun 16, 2008 at 11:45:51AM -0500, Terry wrote:
> I have 4 GFS volumes, each 4 TB. I am seeing pretty high load
> averages on the host that is serving these volumes out via NFS. I
> notice that gfs_scand, dlm_recv, and dlm_scand are running with high
> CPU%. I truly believe the box is I/O bound due to high awaits but
> trying to dig into root cause. 99% of the activity on these volumes
> is write. The number of files is around 15 million per TB. Given
> the high number of writes, increasing scand_secs will not help. Any
> other optimizations I can do?

Are you running multi-threaded/multi-process writes to the same files
on various nodes? During benchmarking and testing of a cluster I
recently built, I noticed a very large performance hit when performing
multi-threaded I/O to overlapping areas of the filesystem.

If you can randomize the order in which the different nodes access the
filesystem, you'll go a long way toward reducing contention, and that
will improve your performance. However, I suspect that with NFS you
won't have much choice, since file access will be governed by the
clients' read/write patterns...

--
Ross Vandegrift
ross@xxxxxxxxxxx

"The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell."
    --St. Augustine, De Genesi ad Litteram, Book II, xviii, 37

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
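
For what it's worth, here is a minimal sketch of the randomization idea
from the reply above. Everything concrete in it is an assumption rather
than something taken from the thread: the /gfs/vol1 mount point, the use
of the hostname as a shuffle seed, and the one-writer-per-node setup are
all hypothetical. The only point is that each node derives its own
ordering, so concurrent writers do not all start in the same part of the
filesystem and pile up on the same GFS/DLM locks.

    # Sketch: give every node its own pseudo-random ordering of the work
    # directories so concurrent writers spread out across the filesystem
    # instead of contending for the same directories.
    import random
    import socket
    import zlib

    def randomized_work_order(directories):
        """Return the directories in an order that differs per node."""
        # Derive a stable seed from the hostname so each node gets a
        # different but repeatable shuffle.
        seed = zlib.crc32(socket.gethostname().encode("utf-8"))
        order = list(directories)
        random.Random(seed).shuffle(order)
        return order

    if __name__ == "__main__":
        # Hypothetical layout -- substitute the real GFS directories.
        work = ["/gfs/vol1/data%02d" % i for i in range(16)]
        for d in randomized_work_order(work):
            print("%s would write to %s next" % (socket.gethostname(), d))

As the reply notes, when the volumes are exported over NFS the access
order is ultimately dictated by the clients, so this kind of reordering
only helps where you control the writer processes directly.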