concurrent write performance

Hi everyone,

I've been doing some tests with a clustered GFS installation that will
eventually host an application making heavy use of concurrent writes
across nodes.

Testing this scenario with a script designed to simulate multiple
writers shows that as I add writer processes across nodes, performance
drops off.  This makes some sense to me, since the nodes have to do
more complicated negotiation of locking.
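For reference, my harness does roughly the following (a simplified
single-node sketch, not the exact script; the block size, write count,
and file size here are placeholders):

```python
import os
import random
import tempfile
import time
from multiprocessing import Process

BLOCK = 4096          # bytes per random write (placeholder)
WRITES = 50           # writes issued by each worker (placeholder)
FILE_SIZE = 4 << 20   # 4 MiB preallocated file per worker (placeholder)

def writer(path):
    """Issue synchronous random writes within a preallocated file."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
    os.ftruncate(fd, FILE_SIZE)
    buf = os.urandom(BLOCK)
    for _ in range(WRITES):
        off = random.randrange(0, FILE_SIZE - BLOCK)
        os.pwrite(fd, buf, off)
        os.fsync(fd)  # force each write to disk so locking actually bites
    os.close(fd)

def run(nworkers, directory):
    """Run nworkers concurrent writers; return aggregate MiB/s."""
    start = time.time()
    procs = [Process(target=writer,
                     args=(os.path.join(directory, "w%d.dat" % i),))
             for i in range(nworkers)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    elapsed = time.time() - start
    mib = nworkers * WRITES * BLOCK / (1024.0 * 1024.0)
    return mib / elapsed

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        print("%.1f MiB/s aggregate" % run(4, d))
```

The real test runs one copy of this per node against the shared GFS
mount, so the writers contend across the cluster rather than within
one box.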

Two questions:

1) How is GFS expected to scale as the number of writer nodes
increases?

2) What kinds of things can I do to increase random write performance
on GFS?  I'm even interested in things that cause some trade-off with
read performance.

I've got the filesystem mounted on all nodes with noatime,quota=off.
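For the record, the fstab entries look roughly like this (device and
mountpoint are placeholders, not my real paths):

```shell
# /etc/fstab entry on each node (device and mountpoint are placeholders)
/dev/clustervg/gfslv  /mnt/gfs2  gfs2  noatime,quota=off  0 0
```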

My filesystem isn't large enough to benefit from reducing the number
of resource groups.

It looks like drop_count for the dlm isn't there anymore.  I looked
at /sys/kernel/config/dlm/cluster - what do the various items in there
tune, and which can I try to mess with to help write performance?
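In case it helps anyone answer, this is how I'm dumping the current
values (all of them, since I don't know which ones matter):

```shell
# Print every dlm cluster tunable and its current value.
# Run on a node with the dlm configfs tree mounted.
for f in /sys/kernel/config/dlm/cluster/*; do
    [ -f "$f" ] && printf '%s = %s\n' "$(basename "$f")" "$(cat "$f")"
done
```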

Finally, I don't see any sign of statfs_slots in the current gfs2_tool
gettune output.  Is there an equivalent I can muck with?
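What I checked, for the record (mountpoint is a placeholder; the
settune line is just the general form of the command, not a
recommendation for any particular parameter):

```shell
# List the tunables gfs2_tool currently exposes for a mount
gfs2_tool gettune /mnt/gfs2

# General form for changing one (pick a name from the gettune output):
# gfs2_tool settune /mnt/gfs2 <parameter> <value>
```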

-- 
Ross Vandegrift
ross@xxxxxxxxxxx

"The good Christian should beware of mathematicians, and all those who
make empty prophecies. The danger already exists that the mathematicians
have made a covenant with the devil to darken the spirit and to confine
man in the bonds of Hell."
	--St. Augustine, De Genesi ad Litteram, Book II, xviii, 37

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
