Re: stress-testing GFS ?

I recently ran some very simple iozone tests on GFS (and OCFS2) and got somewhat disappointing results. I am attaching the spreadsheet.

The first test measured single-node performance with ext3, GFS and OCFS2 partitions, each mounted on a single node. The second used two nodes and ran iozone in parallel (by hand, i.e. without the -m/-t options).
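
For reference, the runs were roughly of the following shape; the mount points, output file names and hostnames below are placeholders rather than the exact commands I used:

    # single-node run, repeated for each filesystem (ext3, GFS, OCFS2)
    iozone -a -R -b ext3-single.xls -f /mnt/test/iozone.tmp

    # two-node run: the same command started by hand on each node at
    # roughly the same time, each node writing its own file on the
    # shared filesystem
    node1$ iozone -a -R -b gfs-n1.xls -f /mnt/gfs/iozone.n1
    node2$ iozone -a -R -b gfs-n2.xls -f /mnt/gfs/iozone.n2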

Single-node performance was comparable in terms of wallclock time, although the benchmark values for ext3 were clearly better (so I am not sure I understand why the wallclock times are so close). The two-node numbers show substantial performance degradation.

Note that I didn't do any tuning, mostly because I didn't find much documentation on the subject (except that for OCFS2 I set the cluster size to 1MB, which helped). The nodes were running FC4 with the disk connected via an Emulex HBA, and cluster tools 1.01.
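
The cluster size was the only tuning step, and it is set at mkfs time; roughly like this (the device, label and node-slot count are placeholders):

    # OCFS2 with a 1MB cluster size; -N is the number of node slots
    mkfs.ocfs2 -C 1M -N 2 -L ocfs2test /dev/sdb1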
                                             
I'd be very interested to hear comments on the numbers and hopefully some tuning suggestions.


Thanks.
-boris


Date: Wed, 15 Mar 2006 14:20:28 -0800
From: Michael Will <mwill@xxxxxxxxxxxxxxxxxxxx>
Subject: Re: stress-testing GFS ?
To: linux clustering <linux-cluster@xxxxxxxxxx>
Message-ID: <4418932C.9080001@xxxxxxxxxxxxxxxxxxxxxxxxx>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

iozone tests a lot of different access patterns and can create nice
spreadsheets, including graphs, from the point of view of a single node.
It also has a multiple-node mode for running it across a cluster; see the
-+m and -t options. It knows how to use 'rsh' and can be configured to use
any other remote execution command by setting the environment variable RSH
to, say, ssh or bpsh.
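
For example, a two-node throughput run might look roughly like this (the hostnames, paths and sizes below are only placeholders):

    # clients file for -+m: one line per client, giving the hostname,
    # the working directory on that client, and the path to iozone there
    cat > clients.txt <<EOF
    node1 /mnt/gfs /usr/local/bin/iozone
    node2 /mnt/gfs /usr/local/bin/iozone
    EOF

    export RSH=ssh                     # use ssh instead of the default rsh
    iozone -+m clients.txt -t 2 -s 512m -r 64k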

Don't forget to post your benchmark results to this mailing list ;-)

Michael

Birger Wathne wrote:

> I would like to put my cluster through a little controlled hell before
> declaring it ready for production.
>
> Is there any kind of stress-test/verification procedure to 'certify'
> shared storage with GFS?
> Ideally there would be some distributed software that could be run in
> a cluster to check that the shared storage behaves as expected under
> all kinds of load. Throughput, concurrent writing, GFS locking, file
> system locking, etc...
> Something that could interface with GFS internals to see that
> everything was 'right' at every step.
>
> Since I have seen nothing about the issue, I assume something like
> that doesn't exist, so... Any ideas on how to stress test GFS?
> Homegrown scripts? Known problems with hardware that a test should
> look for?
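
As a starting point for the homegrown-script question, even something as
trivial as the sketch below (paths and iteration counts are just
placeholders) generates concurrent writes and lock traffic from several
nodes; run it on each node with a different node name and check the files
afterwards:

    #!/bin/sh
    # usage on each node:  sh stress.sh <nodename>
    NODE=$1
    DIR=/mnt/gfs/stress
    mkdir -p $DIR
    i=0
    while [ $i -lt 1000 ]; do
        # each node appends to its own file and to one shared file,
        # so the shared file forces cross-node locking
        echo "$NODE $i" >> $DIR/$NODE.log
        echo "$NODE $i" >> $DIR/shared.log
        sync
        i=$((i + 1))
    done
    # every node's lines should be present and intact in the shared file
    grep -c "^$NODE " $DIR/shared.log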


            wallclock   write   rewrite  read    reread   random  random  bkwd   record   stride  fwrite  frewrite  fread   freread
                                                          read    write   read   rewrite  read

ext3        12.5min     113718  8335     91962   186143   4345    515     9612   258904   6002    112859  7230      76225   139576
gfs         13.5min     27217   8337     50117   62312    1611    604     8233   81180    5749    33633   7958      53301   40331
ocfs2       14.5min     42102   9345     65887   92481    1210    566     8136   155370   5605    41571   8699      78925   71724

gfs (n1)    46min       21467   5159     29705   35512    348     172     808    81188    4970    32680   8039      35667   58961
gfs (n2)    48min       40046   3493     29565   25093    504     327     906    81390    456     30035   4085      24953   22493

ocfs2 (n1)  38min       26813   4375     27406   27408    367     251     892    156194   5038    49998   8882      80288   111914
ocfs2 (n2)  35.5min     22756   5330     36728   29607    673     400     907    153949   953     45964   5117      34055   40158
--

Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
