Hi,
Thanks for the reply. I have hardware RAID 5 storage servers with 4TB WD Red drives. I think they are capable of 6Gb/s (SATA) transfers, so it shouldn't be a drive-speed issue. Just for testing, I ran a dd test directly into the brick mounted on the storage server itself and got around an 800MB/s transfer rate, which is double what I get when the brick is mounted on the client. Are there any other options or tests I can perform to figure out the root cause of my problem? I have exhausted most Google searches and tests.
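For what it's worth, a plain dd run like the ones below can be inflated by the page cache, so the local-brick vs. client numbers may not be strictly comparable. A sketch of a cache-honest version (the TESTFILE path is just a placeholder; point it at your gluster mount or the brick path):

```shell
# Write test that forces the data to stable storage before dd reports a
# rate, so the number reflects storage/network throughput, not RAM speed.
# TESTFILE is a placeholder path; replace it with the gluster mount
# (e.g. /export/gfsmount/testfile) or the brick path you want to measure.
TESTFILE=${TESTFILE:-/tmp/gfs-ddtest}

dd if=/dev/zero of="$TESTFILE" bs=1M count=1024 conv=fdatasync 2>&1 | tail -n 1

# For a fair read test, drop the page cache first (as root) so the data
# actually comes off the disk / over the wire:
#   sync; echo 3 > /proc/sys/vm/drop_caches
dd if="$TESTFILE" of=/dev/null bs=1M count=1024 2>&1 | tail -n 1
```

Running that both on the storage server (into the brick) and on a client (into the mount) gives a cleaner apples-to-apples comparison.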
Kaamesh
On Wed, Aug 3, 2016 at 10:58 PM, Leno Vo <lenovolastname@xxxxxxxxx> wrote:
Your 10G NIC is capable; the problem is the disk speed. Fix your disk speed first: use SSD, SSHD, or SAS 15k drives in a RAID 0 or RAID 5/6 of at least 4 disks.

_______________________________________________

Hi,

I have gluster 3.6.2 installed on my server network. Due to internal issues we are not allowed to upgrade the gluster version. All the clients are on the same version of gluster. When transferring files to/from the clients or between my nodes over the 10Gb network, the transfer rate is capped at 450Mb/s. Is there any way to increase the transfer speeds for gluster mounts?

Our server setup is as follows:

2 gluster servers - gfs1 and gfs2
volume name: gfsvolume
3 clients - hpc1, hpc2, hpc3
gluster volume mounted on /export/gfsmount/

The following are the average results of what I have done so far:

1) tested bandwidth with iperf between all machines - 9.4 Gb/s

2) tested write speed with dd:
dd if=/dev/zero of=/export/gfsmount/testfile bs=1G count=1
result = 399 MB/s

3) tested read speed with dd:
dd if=/export/gfsmount/testfile of=/dev/zero bs=1G count=1
result = 284 MB/s

My gluster volume configuration:

Volume Name: gfsvolume
Type: Replicate
Volume ID: a29bd2fb-b1ef-4481-be10-c2f4faf4059b
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gfs1:/export/sda/brick
Brick2: gfs2:/export/sda/brick
Options Reconfigured:
performance.quick-read: off
network.ping-timeout: 30
network.frame-timeout: 90
performance.cache-max-file-size: 2MB
cluster.server-quorum-type: none
nfs.addr-namelookup: off
nfs.trusted-write: off
performance.write-behind-window-size: 4MB
cluster.data-self-heal-algorithm: diff
performance.cache-refresh-timeout: 60
performance.cache-size: 1GB
cluster.quorum-type: fixed
auth.allow: 172.*
cluster.quorum-count: 1
diagnostics.latency-measurement: on
diagnostics.count-fop-hits: on
cluster.server-quorum-ratio: 50%

Any help would be appreciated.

Thanks,
Kaamesh
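Since diagnostics.latency-measurement is already on, one way to narrow down where the time goes is gluster's built-in profiling, and then experiment with a couple of client-side translator knobs. The option names below are standard gluster volume options, but the values are only illustrative guesses, not tested recommendations for this setup:

```shell
# Profile the volume to see which file operations (FOPs) carry the latency.
gluster volume profile gfsvolume start
# ... reproduce the slow transfer from a client, then inspect per-brick stats:
gluster volume profile gfsvolume info

# Illustrative tuning knobs (values are guesses for experimentation):
# a larger write-behind window can help large sequential writes,
gluster volume set gfsvolume performance.write-behind-window-size 8MB
# and more io-threads can help if a single brick thread is the bottleneck.
gluster volume set gfsvolume performance.io-thread-count 32
```

Comparing the profile output for a local-brick dd against a client-mount dd should show whether the extra latency is in the network round-trips or in the brick itself.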
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users