Re: Fw: Re: Corvid gluster testing

On 08/06/2014 12:11 AM, David F. Robinson wrote:
I have been testing some of the fixes that Pranith incorporated into the
3.5.2-beta to see how they performed for moderate levels of i/o. All of
the stability issues that I had seen in previous versions seem to have
been fixed in 3.5.2; however, there still seem to be some significant
performance issues.  Pranith suggested that I send this to the
gluster-devel email list, so here goes:
I am running an MPI job that saves a restart file to the gluster file
system.  When I use the following fstab entry to mount the gluster
volume, the i/o time for the 2.5GB file is roughly 45 seconds:

    gfsib01a.corvidtec.com:/homegfs /homegfs glusterfs transport=tcp,_netdev 0 0
When I switch this to use the NFS protocol (see below), the i/o time is
2.5 seconds:

    gfsib01a.corvidtec.com:/homegfs /homegfs nfs vers=3,intr,bg,rsize=32768,wsize=32768 0 0
The read times for gluster are 10-20% faster than NFS, but the write
times are almost 20x slower.
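
If it helps, the same behavior can be checked without the MPI code by
timing a single large sequential write against each mount (the target
file below is just an example):

    # Time a 2.5GB write in 128KB blocks; conv=fsync makes dd flush to
    # the server before exiting, so the elapsed time is honest
    time dd if=/dev/zero of=/homegfs/ddtest.bin bs=128K count=20480 conv=fsync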

What is the block size of the writes that are being performed? You can expect better throughput and lower latency with block sizes that are close to or greater than 128KB.
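
If it is not obvious from the application, one way to check is to attach
strace to the writing process and look at the size argument of its
write() calls (the PID below is illustrative):

    # The third argument of each write() line is the block size in bytes
    strace -f -e trace=write -p 12345 2>&1 | head -n 20

GlusterFS can also report the block-size distribution it sees per volume
via profiling:

    # Enable profiling, run the job, then dump per-fop statistics,
    # which include a block-size histogram for reads and writes
    gluster volume profile homegfs start
    gluster volume profile homegfs info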

-Vijay

_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-devel



