Hi,
I was mistaken about my server specs.
My actual specs for each server are:
12 x 4.0 TB 3.5" LFF NL-SAS 6G, 128 MB cache, 7.2K RPM HDDs (used as the data store, set up as RAID 6 for 36.0 TB usable storage),
not the WD Red drives I mentioned earlier. I would expect a higher transfer rate with these drives; 400 MB/s seems too slow to me.
Any help would be greatly appreciated, as I'm not sure where I should start debugging this issue.
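For reference, the comparison I have been running is the same dd test as before, once directly against the brick on the storage server and once through the gluster mount on a client (paths are from my setup; conv=fdatasync is added here so the page cache doesn't inflate the write number, the exact flags I used may have differed):

    # on the storage server, writing straight to the brick filesystem
    dd if=/dev/zero of=/export/sda/brick/testfile bs=1G count=1 conv=fdatasync

    # on a client, writing through the gluster mount
    dd if=/dev/zero of=/export/gfsmount/testfile bs=1G count=1 conv=fdatasync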
On Fri, Aug 5, 2016 at 2:44 AM, Leno Vo <lenovolastname@xxxxxxxxx> wrote:
I got 1.2 GB/s on Seagate SSHD ST1000LX001 RAID 5 x3 (but with the dreaded array cache on) and 1.1 GB/s on Samsung Pro SSD 1TB x3 RAID 5 (no array caching, since it's not compatible on ProLiant; not an enterprise SSD).

On Thursday, August 4, 2016 5:23 AM, Kaamesh Kamalaaharan <kaamesh@xxxxxxxxxxxxx> wrote:
Hi, thanks for the reply. I have hardware RAID 5 storage servers with 4TB WD Red drives. I think they are capable of 6 Gb/s transfers, so it shouldn't be a drive speed issue. Just for testing, I ran a dd test directly into the brick mounted from the storage server itself and got around 800 MB/s, which is double what I get when the brick is mounted on the client. Are there any other options or tests I can perform to figure out the root cause of my problem? I have exhausted most Google searches and tests.

Kaamesh

On Wed, Aug 3, 2016 at 10:58 PM, Leno Vo <lenovolastname@xxxxxxxxx> wrote:

Your 10G NIC is capable; the problem is the disk speed. Fix your disk speed first: use SSD, SSHD, or 15K SAS in a RAID 0 or RAID 5/6 of at least four drives.

______________________________

Hi,
I have gluster 3.6.2 installed on my server network. Due to internal issues we are not allowed to upgrade the gluster version. All the clients are on the same version of gluster. When transferring files to/from the clients or between my nodes over the 10Gb network, the transfer rate is capped at 450 MB/s. Is there any way to increase the transfer speeds for gluster mounts?

Our server setup is as follows:
2 gluster servers - gfs1 and gfs2
volume name: gfsvolume
3 clients - hpc1, hpc2, hpc3
gluster volume mounted on /export/gfsmount/

The following are the average results of what I have done so far:

1) test bandwidth with iperf between all machines - 9.4 Gbit/s

2) test write speed with dd
dd if=/dev/zero of=/export/gfsmount/testfile bs=1G count=1
result = 399 MB/s

3) test read speed with dd
dd if=/export/gfsmount/testfile of=/dev/zero bs=1G count=1
result = 284 MB/s

My gluster volume configuration:

Volume Name: gfsvolume
Type: Replicate
Volume ID: a29bd2fb-b1ef-4481-be10-c2f4faf4059b
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gfs1:/export/sda/brick
Brick2: gfs2:/export/sda/brick
Options Reconfigured:
performance.quick-read: off
network.ping-timeout: 30
network.frame-timeout: 90
performance.cache-max-file-size: 2MB
cluster.server-quorum-type: none
nfs.addr-namelookup: off
nfs.trusted-write: off
performance.write-behind-window-size: 4MB
cluster.data-self-heal-algorithm: diff
performance.cache-refresh-timeout: 60
performance.cache-size: 1GB
cluster.quorum-type: fixed
auth.allow: 172.*
cluster.quorum-count: 1
diagnostics.latency-measurement: on
diagnostics.count-fop-hits: on
cluster.server-quorum-ratio: 50%

Any help would be appreciated.

Thanks,
Kaamesh
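P.S. The bandwidth figure in step 1 of my quoted message came from a plain iperf client/server run between the machines, roughly like this (the exact flags I used may have differed):

    # on one node, e.g. gfs1
    iperf -s

    # on another node, e.g. hpc1, run against gfs1 for 30 seconds
    iperf -c gfs1 -t 30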
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users