Re: performance due to network?


 




Thanks Erik for the quick reply.

My bad, I thought it was over NFS. :(

1. Many layers can impact I/O performance, e.g.:

a. Disk subsystem
b. RAID controller
c. Underlying file system (in brick servers)
d. Network (NIC driver and TCP/IP stack)

Are these tuned as per the RHEL tuning guide? Which OS is in use?

2. Could you set the following option to check whether it improves performance?

gluster volume set <volume name> server.outstanding-rpc-limit 128

(or 256 or 512)

3. Does your large file copy finish? Could you try with a 5 GB (or smaller) file which would finish?

4. Please share the dmesg output.
5. Share the output of cat /proc/mounts.
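Concretely, points 2 through 5 might look like this (a sketch only: the volume name gv0 and the mount path /mnt/gluster are placeholders for your own, and the gluster CLI takes the form `gluster volume set <volume> <option> <value>`):

```shell
# 2. Raise the outstanding RPC limit (try 128, then 256, then 512):
gluster volume set gv0 server.outstanding-rpc-limit 128

# 3. A bounded copy test that will finish: write a 5 GB file through the mount.
dd if=/dev/zero of=/mnt/gluster/testfile bs=1M count=5120 conv=fdatasync

# 4. and 5. Collect the diagnostics to share:
dmesg > dmesg.out
cat /proc/mounts > mounts.out
```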

Thanks,
Santosh





On 06/13/2014 06:52 PM, Aronesty, Erik wrote:

I have not tried to use NFS.

 

From: Santosh Pradhan [mailto:spradhan@xxxxxxxxxx]
Sent: Friday, June 13, 2014 9:22 AM
To: Aronesty, Erik; Pranith Kumar Karampuri; gluster-users@xxxxxxxxxxx
Subject: Re: performance due to network?

 

Hi Erik,
Could you just turn the DRC off and retry your test case?

1. Turn the DRC off:
gluster volume set <volume name> nfs.drc off

2. Restart all the gluster processes
a. killall glusterd glusterfs glusterfsd
b. glusterd

Step 2.b should bring back all the gluster processes.

3. Retry your large copy test.

Thanks,
Santosh

On 06/13/2014 05:16 PM, Aronesty, Erik wrote:

glusterfs 3.5.0 built on Apr 24 2014 01:38:34

 

From: Pranith Kumar Karampuri [mailto:pkarampu@xxxxxxxxxx]
Sent: Friday, June 13, 2014 1:21 AM
To: Aronesty, Erik; gluster-users@xxxxxxxxxxx
Subject: Re: performance due to network?

 

Erik,
What version of glusterfs are you using?

Pranith

On 06/13/2014 02:09 AM, Aronesty, Erik wrote:

I suspect I'm having performance issues because of network speeds.

 

Supposedly I have 10Gbit connections on all my NAS devices; however, it seems the fastest I can write is 1Gbit. When I'm copying very large files, I see the cp in 'D' state as it waits on I/O, but when I go to the gluster servers, I don't see glusterfsd waiting (D) to write to the bricks themselves. I have 4 nodes, each with a 10Gbit connection; each has 2 Areca RAID controllers with a 12-disk RAID 5, and the 2 controllers are striped into 1 large volume. I'm pretty sure there's plenty of I/O capacity left on the bricks themselves.
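One way to rule the network path in or out, independent of glusterfs, is to measure raw TCP throughput between the client and a brick server. A sketch using iperf3 (an assumption: iperf3 installed on both ends; `server1` is a placeholder hostname):

```shell
# On one brick server, start a listener:
iperf3 -s

# On the client, push data for 10 seconds over a few parallel streams:
iperf3 -c server1 -t 10 -P 4
# If the reported bitrate tops out near 1 Gbit/s, the bottleneck is the
# network path (NIC, driver, or switch port), not glusterfs itself.
```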

 

Is it possible that "one big file" isn't the right test… should I try 20 big files, and see how saturated my network can get?
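A multi-writer test along those lines could be sketched as follows (the scratch directory, writer count, and per-file size are deliberately tiny placeholders so the script runs anywhere; point DEST at the gluster mount and scale the numbers up for a real measurement):

```shell
# Parallel write test sketch: launch N writers and time the whole batch.
DEST=$(mktemp -d)   # placeholder; use the gluster mount for the real test
N=4                 # number of parallel writers (use ~20 on the real mount)
MB_PER_FILE=8       # per-file size in MiB (use 1024+ on the real mount)

start=$(date +%s)
for i in $(seq 1 "$N"); do
  # fdatasync forces the data to disk before dd exits, so the timing
  # reflects real writes rather than page-cache absorption.
  dd if=/dev/zero of="$DEST/bigfile.$i" bs=1M count="$MB_PER_FILE" \
     conv=fdatasync status=none &
done
wait
end=$(date +%s)
echo "wrote $((N * MB_PER_FILE)) MiB in $((end - start))s"
```

Watching the NIC counters (or iftop) on the servers while this runs would show how close the aggregate gets to the 10Gbit line rate.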

 

Erik Aronesty
Senior Bioinformatics Architect

EA | Quintiles
Genomic Services

4820 Emperor Boulevard

Durham, NC 27703 USA


Office: + 919.287.4011
erik.aronesty@xxxxxxxxxxxxx

www.quintiles.com  
www.expressionanalysis.com

 





_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users

 





 


