Just to clarify a little, there are two cases where I was evaluating performance.
1) The first case, which Pranith was working on, involved 20 nodes with 4 processors per node, for a total of 80 processors. Each processor does its own independent I/O. The files are roughly 100-200MB each and there are several hundred of them. When I mounted the gluster volume using FUSE, the I/O took 1.5 hours. When I mounted the same volume using NFS, it took 30 minutes. Note that in order to get the FUSE-mounted filesystem down to 1.5 hours, I had to get rid of the replicated volume (this was done during troubleshooting with Pranith to rule out other possible issues). The timing was significantly worse (3+ hours) when I was using a replicated pair. (The two mounts being compared are sketched just after this list.)
2) The second case is the output of a single larger file (roughly 2.5TB). For this case, the gluster (FUSE) mounted filesystem takes 60 seconds (although I got that down to 52 seconds with some gluster parameter tuning). The NFS mount takes 38 seconds. I sent the results of this case to the devel list first because it is much easier to test (roughly 50 seconds versus what could be 3+ hours).
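
For reference, the two mounts I am comparing in case 1 are the GlusterFS native (FUSE) client and an NFS mount of the same volume. Roughly along these lines (server name, volume name, and mount points below are placeholders, not my actual setup):

    # GlusterFS native (FUSE) client mount
    mount -t glusterfs gfs-server:/gv0 /mnt/gluster

    # NFSv3 mount of the same volume (Gluster's built-in NFS server speaks NFSv3)
    mount -t nfs -o vers=3 gfs-server:/gv0 /mnt/gluster-nfs

The 80 processes then simply do their independent reads/writes under whichever mount point is being timed.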
I am headed out of town for a few days and will not be able to do additional testing until Monday. For the second case, I will turn off cluster.eager-lock and send the results to the email list. If there is any other testing that you would like to see for the first case, let me know and I will be happy to perform the tests and send in the results...
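
For reference, I expect to disable it with the standard volume-set command, something like this (the volume name is a placeholder):

    # turn eager locking off for the volume, then re-run the large-file write test
    gluster volume set gv0 cluster.eager-lock off

    # confirm the reconfigured option is listed for the volume
    gluster volume info gv0

and re-enable it afterwards by setting the same option back to "on".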
Sorry for the confusion...
David
------ Original Message ------
From: "Pranith Kumar Karampuri" <pkarampu@xxxxxxxxxx>
To: "Anand Avati" <avati@xxxxxxxxxxx>
Cc: "David F. Robinson" <david.robinson@xxxxxxxxxxxxx>; "Gluster Devel" <gluster-devel@xxxxxxxxxxx>
Sent: 8/6/2014 9:51:11 PM
Subject: Re: Fw: Re: Corvid gluster testing