My apologies. I did some additional testing and realized that my timing
wasn't right. I believe that after the write, NFS caches the data, so
the timing isn't accurate until I close and flush the file.
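To get comparable numbers, the timings below include the flush; e.g.
something along these lines (a sketch; the restart file name is just
an example):

    # conv=fsync makes dd call fsync() before it exits, so the reported
    # time covers the data actually reaching the server, not just the
    # client page cache.
    time dd if=restart.bin of=/homegfs/restart.bin bs=1M conv=fsync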
I believe the correct timings are now 38 seconds for NFS and
60 seconds for gluster. I played around with some of the parameters and
got it down to 52 seconds with gluster by setting:
performance.write-behind-window-size: 128MB
performance.cache-size: 128MB
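For reference, these were applied with the standard gluster CLI; a
sketch (volume name homegfs, taken from the fstab entries below):

    gluster volume set homegfs performance.write-behind-window-size 128MB
    gluster volume set homegfs performance.cache-size 128MB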
I couldn't get it closer to the NFS timing on the writes, although the
read speeds were slightly better than NFS. I am not sure if this is
reasonable, or if I should be able to get write speeds that are more
comparable to the NFS mount...
Sorry for the confusion I might have caused with my first email... It
isn't 25x slower. It is roughly 30% slower for the writes (52 seconds
for gluster vs. 38 seconds for NFS)...
David
------ Original Message ------
From: "Vijay Bellur" <vbellur@xxxxxxxxxx>
To: "David F. Robinson" <david.robinson@xxxxxxxxxxxxx>;
gluster-devel@xxxxxxxxxxx
Sent: 8/6/2014 12:48:09 PM
Subject: Re: Fw: Re: Corvid gluster testing
On 08/06/2014 12:11 AM, David F. Robinson wrote:
I have been testing some of the fixes that Pranith incorporated into
the 3.5.2-beta to see how they performed for moderate levels of i/o.
All of the stability issues that I had seen in previous versions seem
to have been fixed in 3.5.2; however, there still seem to be some
significant performance issues. Pranith suggested that I send this to
the gluster-devel email list, so here goes:
I am running an MPI job that saves a restart file to the gluster file
system. When I use the following in my fstab to mount the gluster
volume, the i/o time for the 2.5GB file is roughly 45 seconds:

    gfsib01a.corvidtec.com:/homegfs /homegfs glusterfs transport=tcp,_netdev 0 0
When I switch this to use the NFS protocol (see below), the i/o time
is 2.5 seconds:

    gfsib01a.corvidtec.com:/homegfs /homegfs nfs vers=3,intr,bg,rsize=32768,wsize=32768 0 0
The read times for gluster are 10-20% faster than NFS, but the write
times are almost 20x slower.
What is the block size of the writes that are being performed? You can
expect better throughput and lower latency with block sizes that are
close to or greater than 128KB.
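For example, a quick way to check throughput at a given block size on
the gluster mount is something like this (a sketch; the test file path
is just an example):

    # Write ~2.5GB in 128KB blocks; conv=fsync includes the final flush
    # in the reported time.
    time dd if=/dev/zero of=/homegfs/ddtest.bin bs=128k count=20480 conv=fsync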
-Vijay