GlusterFS FUSE Client Performance Issues

Both the client and the server are running Ubuntu 14.04 with GlusterFS 3.7 from the Ubuntu PPA.

I am going to use Gluster to create a simple replicated NFS server. I was hoping to use the native FUSE client to also get seamless failover, but I am running into performance issues that are going to prevent me from doing so.

I have a replicated Gluster volume on a 24-core server with 128 GB RAM, 10GbE networking, and RAID-10 storage served via ZFS.

From a remote client I mount the same volume via both NFS and the native client.

I did some really basic performance tests just to get a feel for the penalty the user-space client would incur.
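The individual dd runs below all follow the same pattern; as a rough sketch, a loop like the following sweeps the same block sizes against whichever mount you point it at (TARGET and the smaller count=64 are placeholders for illustration, not the original test parameters):

```shell
#!/bin/sh
# Sketch only: sweep dd block sizes against one mount point and print the
# throughput summary line from each run. TARGET defaults to /tmp here;
# point it at /mnt/backups_nfs or /mnt/backups_gluster to repeat the
# comparison shown below.
TARGET=${TARGET:-/tmp}
for bs in 16k 64k 128k; do
    dd if=/dev/zero of="$TARGET/ddtest" bs="$bs" count=64 2>&1 | tail -n 1
done
rm -f "$TARGET/ddtest"
```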

I must admit I was shocked at how "poor" the Gluster FUSE client performed. I know that small block sizes are not Gluster's favorite, but even at larger ones the penalty is still substantial.

Is this to be expected or is there some configuration that I am missing?
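Not an answer from this thread, but the options commonly suggested for FUSE-client write throughput in this generation of GlusterFS look like the following (the volume name "backups" is taken from the mounts below; the values are illustrative, so verify each option with `gluster volume set help` before applying):

```shell
# Suggestions only -- these are standard GlusterFS performance options,
# not settings confirmed by this thread. Run on one of the server nodes.
gluster volume set backups performance.write-behind on
gluster volume set backups performance.flush-behind on
gluster volume set backups performance.write-behind-window-size 4MB
gluster volume set backups performance.client-io-threads on
gluster volume set backups performance.io-thread-count 32
```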

If providing any more info would be helpful - please let me know.

Thanks!

root@vc1test001 /root 489# mount -t nfs dc1strg001x:/zfspool/glusterfs/backups /mnt/backups_nfs
root@vc1test001 /root 490# mount -t glusterfs dc1strg001x:backups /mnt/backups_gluster
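One client-side knob worth checking before the tuning options above (an assumption on my part, not something from the thread): the FUSE mount's direct-io-mode, which, when enabled, bypasses the kernel page cache and hurts small-block writes:

```shell
# Sketch: remount the FUSE client with direct I/O explicitly disabled so
# small writes can be absorbed by the page cache. direct-io-mode is a
# standard glusterfs mount option; hostname and paths match the mounts above.
umount /mnt/backups_gluster
mount -t glusterfs -o direct-io-mode=disable dc1strg001x:backups /mnt/backups_gluster
```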

root@vc1test001 /mnt/backups_nfs 492# dd if=/dev/zero of=testfile bs=16k count=16384
16384+0 records in
16384+0 records out
268435456 bytes (268 MB) copied, 2.6763 s, 100 MB/s

root@vc1test001 /mnt/backups_nfs 510# dd if=/dev/zero of=testfile1 bs=64k count=16384
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 10.7434 s, 99.9 MB/s

root@vc1test001 /mnt/backups_nfs 517# dd if=/dev/zero of=testfile1 bs=128k count=16384
16384+0 records in
16384+0 records out
2147483648 bytes (2.1 GB) copied, 19.0354 s, 113 MB/s

root@vc1test001 /mnt/backups_gluster 495# dd if=/dev/zero of=testfile bs=16k count=16384
16384+0 records in
16384+0 records out
268435456 bytes (268 MB) copied, 102.058 s, 2.6 MB/s

root@vc1test001 /mnt/backups_gluster 513# dd if=/dev/zero of=testfile1 bs=64k count=16384
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 114.053 s, 9.4 MB/s

root@vc1test001 /mnt/backups_gluster 514# dd if=/dev/zero of=testfile1 bs=128k count=16384
16384+0 records in
16384+0 records out
2147483648 bytes (2.1 GB) copied, 123.904 s, 17.3 MB/s

root@vc1test001 /tmp 504# rsync -av --progress testfile1 /mnt/backups_nfs/
sending incremental file list
testfile1
  1,073,741,824 100%   89.49MB/s    0:00:11 (xfr#1, to-chk=0/1)

sent 1,074,004,057 bytes  received 35 bytes  74,069,247.72 bytes/sec
total size is 1,073,741,824  speedup is 1.00

root@vc1test001 /tmp 505# rsync -av --progress testfile1 /mnt/backups_gluster/
sending incremental file list
testfile1
  1,073,741,824 100%   25.94MB/s    0:00:39 (xfr#1, to-chk=0/1)

sent 1,074,004,057 bytes  received 35 bytes  27,189,977.01 bytes/sec
total size is 1,073,741,824  speedup is 1.00

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users


