Write speed issues with 16MB files and using Gluster fuse mount vs NFS

Hi all,

We were initially planning on using NFS mounts for our gluster deployment to reduce the amount of client-side changes we had to make to swap out our existing NFS solution. I ran into an HA issue with NFS (see my other post from this morning), so I started looking into the FUSE client as an alternative. What I found is that I can't seem to get above ~30MB/s with a gluster mount transferring 16MB files, even though an NFS mount from the same client to the same volume reaches ~165MB/s. This seems like a much more drastic difference than I've seen others mention, but I don't have a good feel for how to troubleshoot it. Any advice?

Below is what I'm seeing. nfs03-poc is the client, and gfs-slc02.mgmt and gfs-slc03.mgmt are the gluster nodes serving the data via RRDNS at gfs-slc.mgmt. Please let me know if any other info would be helpful.

 

--

On the client side:

[root@nfs03-poc fio]# cat GlusterConfig.fio
[global]
description=fio load testing...simulating disk activity for writing the WAL file
blocksize=8192
filesize=16m
fsync=0
iomem=malloc
loops=1
numjobs=1
prio=0
runtime=1m
rw=write
size=200M
startdelay=0
time_based

[wal01]
directory=/mnt/gluster/fio_test
ioengine=sync
nrfiles=1

[wal02]
directory=/mnt/gluster/fio_test
ioengine=sync
nrfiles=1

[wal03]
directory=/mnt/gluster/fio_test
ioengine=sync
nrfiles=1

..
..
..

[wal18]
directory=/mnt/gluster/fio_test
ioengine=sync
nrfiles=1

[wal19]
directory=/mnt/gluster/fio_test
ioengine=sync
nrfiles=1

[wal20]
directory=/mnt/gluster/fio_test
ioengine=sync
nrfiles=1
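For reference, I believe the 20 wal sections above boil down to one job cloned 20 times, so the whole config should be roughly equivalent to this single fio command line (untested sketch):

[root@nfs03-poc fio]# fio --name=wal --directory=/mnt/gluster/fio_test --ioengine=sync --rw=write --blocksize=8192 --filesize=16m --size=200M --nrfiles=1 --numjobs=20 --runtime=1m --time_based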

 

 

[root@nfs03-poc fio]# iperf -c gfs-slc02.mgmt
------------------------------------------------------------
Client connecting to gfs-slc02.mgmt, TCP port 5001
TCP window size: 19.3 KByte (default)
------------------------------------------------------------
[  3] local 10.124.10.18 port 58877 connected with 10.124.0.91 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  9.96 GBytes  8.56 Gbits/sec

[root@nfs03-poc fio]# iperf -c gfs-slc03.mgmt
------------------------------------------------------------
Client connecting to gfs-slc03.mgmt, TCP port 5001
TCP window size: 19.3 KByte (default)
------------------------------------------------------------
[  3] local 10.124.10.18 port 46043 connected with 10.124.0.88 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  8.69 GBytes  7.47 Gbits/sec
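Both of those are single-stream tests; if aggregate bandwidth with parallel streams would be more telling, I can rerun with iperf's parallel option, e.g.:

[root@nfs03-poc fio]# iperf -c gfs-slc02.mgmt -P 4

(-P is iperf's parallel-client-threads flag; I haven't run that variant yet.)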

 

 

[root@nfs03-poc fio]# umount /mnt/gluster/
[root@nfs03-poc fio]# mount -t nfs -o async gfs-slc.mgmt:/dr_opt_nwea /mnt/gluster/
[root@nfs03-poc fio]# mount | grep gluster
gfs-slc.mgmt:/dr_opt_nwea on /mnt/gluster type nfs (rw,addr=10.124.0.151)
[root@nfs03-poc fio]# fio GlusterConfig.fio --output=Gluster1.txt
[root@nfs03-poc fio]# tail -n 1 Gluster1.txt
  WRITE: io=9749.1MB, aggrb=166113KB/s, minb=7034KB/s, maxb=9929KB/s, mint=60015msec, maxt=60103msec
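That works out to 166113KB/s / 1024 ≈ 162MB/s aggregate over NFS, which is the ~165MB/s ballpark I mentioned above.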

 

 

[root@nfs03-poc fio]# umount /mnt/gluster/
[root@nfs03-poc fio]# mount -t glusterfs -o direct-io-mode=disable gfs-slc.mgmt:/dr_opt_nwea /mnt/gluster
[root@nfs03-poc fio]# mount | grep gluster
gfs-slc.mgmt:/dr_opt_nwea on /mnt/gluster type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
[root@nfs03-poc fio]# fio GlusterConfig.fio --output=Gluster2.txt
[root@nfs03-poc fio]# tail -n 1 Gluster2.txt
  WRITE: io=1756.8MB, aggrb=29975KB/s, minb=1498KB/s, maxb=1499KB/s, mint=60001msec, maxt=60012msec
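Same math here: 29975KB/s / 1024 ≈ 29.3MB/s through the fuse mount, i.e. roughly 5.5x slower than the identical job over NFS.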

 

 

 

On the gluster server side:

[root@xxxxxxxxxxxxxx ~]$ gluster --version
glusterfs 3.7.5 built on Oct  7 2015 16:27:05

[root@xxxxxxxxxxxxxx ~]$ gluster volume info dr_opt_nwea
Volume Name: dr_opt_nwea
Type: Replicate
Volume ID: 6ecf855e-e7e0-4479-bde9-f32e5e820365
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gfs-slc02.mgmt:/data/glusterfs/dr_opt_nwea_brick1/brick1
Brick2: gfs-slc03.mgmt:/data/glusterfs/dr_opt_nwea_brick1/brick1
Options Reconfigured:
performance.readdir-ahead: on
nfs.export-volumes: on
nfs.addr-namelookup: Off
nfs.disable: off
network.ping-timeout: 5
cluster.server-quorum-type: server
cluster.server-quorum-ratio: 51%
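I haven't gathered per-brick latency stats yet, but I believe gluster volume profile would show them if that's useful, e.g.:

[root@xxxxxxxxxxxxxx ~]$ gluster volume profile dr_opt_nwea start
[root@xxxxxxxxxxxxxx ~]$ gluster volume profile dr_opt_nwea info

(syntax from the gluster CLI docs; happy to run it and post the output.)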

 

[root@xxxxxxxxxxxxxx ~]$ dd if=/dev/zero of=/data/glusterfs/dr_opt_nwea_brick1/brick1/ddtest bs=4k count=10000
10000+0 records in
10000+0 records out
40960000 bytes (41 MB) copied, 0.0311661 s, 1.3 GB/s

[root@xxxxxxxxxxxxxx ~]$ dd if=/dev/zero of=/data/glusterfs/dr_opt_nwea_brick1/brick1/ddtest bs=4k count=10000
10000+0 records in
10000+0 records out
40960000 bytes (41 MB) copied, 0.0445559 s, 919 MB/s
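(I realize those dd runs don't force a flush, so they're presumably measuring mostly page cache; if a raw brick number matters I can rerun with a sync at the end, e.g.:

[root@xxxxxxxxxxxxxx ~]$ dd if=/dev/zero of=/data/glusterfs/dr_opt_nwea_brick1/brick1/ddtest bs=4k count=10000 conv=fdatasync

Either way the bricks look far faster than the ~30MB/s I see through the fuse mount.)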

 

--- 

 

I've also tried the following volume settings, which made little to no difference in the gluster mount test:

 

performance.write-behind-window-size: 1MB
performance.io-cache: on
performance.client-io-threads: on
performance.cache-size: 512MB
performance.io-thread-count: 64
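(These were applied with the usual gluster volume set syntax, e.g.:

[root@xxxxxxxxxxxxxx ~]$ gluster volume set dr_opt_nwea performance.write-behind-window-size 1MB

and likewise for the other four.)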

 

Ideas?

 

Thanks,

Kris

 

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users
