Re: GlusterFS FUSE Client Performance Issues

glusterfs-3.7.8 has a performance regression for FUSE mounts, which is being tracked at https://bugzilla.redhat.com/show_bug.cgi?id=1309462 and will be fixed in the next release. In the meantime, can you check whether `gluster volume set <VOLNAME> data-self-heal off` makes a difference for you?
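
If it helps, a minimal sketch of applying that, using the "backups" volume from this thread as the example (substitute your own volume name); run it on one of the gluster servers:

  # Disable client-side data self-heal for the volume
  gluster volume set backups data-self-heal off
  # The change should now show up under "Options Reconfigured"
  gluster volume info backups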

Thanks,
Ravi


On 02/27/2016 02:37 AM, Mark Selby wrote:
I think I should provide some additional info.

To be more explicit: the volumes are replicated (replica 2) volumes, created with the command

gluster volume create $VOL replica 2 dc1strg001x:/zfspool/glusterfs/$VOL/data dc1strg002x:/zfspool/glusterfs/$VOL/data
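
For completeness, this is roughly how I sanity-check the volume after creation (same $VOL placeholder as above):

  gluster volume start $VOL    # start the volume if it is not already started
  gluster volume info $VOL     # should show Type: Replicate, Number of Bricks: 1 x 2 = 2
  gluster volume status $VOL   # per-brick and self-heal daemon status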

I also decided to use a "real" file for the testing, and came up with somewhat different results.

linux-lts-raring.tar is just a tar file of a whole bunch of binaries. When I control the block size and use a large one (dd with bs=64M), I very nearly close the performance gap with NFS.

When I do not control the block size (rsync), I take a ~50% performance hit.
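
If anyone wants to verify the block-size theory, I assume something like this would show the write sizes rsync actually issues against the mount (assuming strace is available on the client and a single rsync process is running):

  # Attach to the newest rsync process and watch the size argument of each write()
  strace -f -e trace=write -p $(pgrep -n rsync) 2>&1 | head -n 20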

Someone told me that when I use the Gluster FUSE client against a replicated volume, I am actually writing the data twice - once to each brick. That would make sense of why writes to NFS are faster: the data is written to only one server, and the two servers then replicate between each other.
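
Rough arithmetic on what that implies, in case someone can sanity-check me:

  replica 2 => the FUSE client sends every byte twice, once to each brick
  max FUSE write throughput ~= client link bandwidth / 2
  on 10GbE that ceiling is ~600 MB/s, far above the ~46 MB/s I measured,
  so presumably the cost here is per-write round trips, not raw bandwidth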

Does anyone have any overall suggestions about using the GlusterFS FUSE client as a general-purpose network store vs. the NFS client?

My feeling right now is that I am just going to have to try it with a real-world load and see whether the loss in write performance is acceptable.

Thanks!


root@vc1test001 /tmp 570# dd if=linux-lts-raring.tar of=/mnt/backups_nfs/linux-lts-raring.tar bs=64M count=256
42+1 records in
42+1 records out
2851440640 bytes (2.9 GB) copied, 54.6371 s, 52.2 MB/s

root@vc1test001 /tmp 571# dd if=linux-lts-raring.tar of=/mnt/backups_gluster/linux-lts-raring.tar bs=64M count=256
42+1 records in
42+1 records out
2851440640 bytes (2.9 GB) copied, 61.8533 s, 46.1 MB/s


root@vc1test001 /tmp 564# rsync -av --progress linux-lts-raring.tar /mnt/backups_nfs/
sending incremental file list
linux-lts-raring.tar
  2,851,440,640 100%   43.63MB/s    0:01:02 (xfr#1, to-chk=0/1)

sent 2,852,136,896 bytes  received 35 bytes  44,219,177.22 bytes/sec
total size is 2,851,440,640  speedup is 1.00

root@vc1test001 /tmp 565# rsync -av --progress linux-lts-raring.tar /mnt/backups_gluster/
sending incremental file list
linux-lts-raring.tar
  2,851,440,640 100%   22.33MB/s    0:02:01 (xfr#1, to-chk=0/1)

sent 2,852,136,896 bytes  received 35 bytes  23,282,750.46 bytes/sec
total size is 2,851,440,640  speedup is 1.00




On 2/26/16 9:45 AM, Mark Selby wrote:
Both the client and the server are running Ubuntu 14.04 with GlusterFS 3.7 from the Ubuntu PPA.

I am going to use Gluster to create a simple replicated NFS server. I was hoping to use the native FUSE client to also get seamless failover, but I am running into performance issues that are going to prevent me from doing so.
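
(For context, the failover I am hoping for from the FUSE client is roughly this; I believe backup-volfile-servers is the right mount option for 3.7, but I have not confirmed it:

  mount -t glusterfs -o backup-volfile-servers=dc1strg002x dc1strg001x:backups /mnt/backups_gluster

My understanding is the volfile server only matters at mount time, since the FUSE client then talks to all the bricks directly.)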

I have a replicated Gluster volume on 24-core servers with 128GB RAM, 10GbE networking, and RAID-10 storage served via ZFS.

From a remote client I mount the same volume via both NFS and the native client.

I did some really basic performance tests just to get a feel for what penalty the user-space client would incur.

I must admit I was shocked at how "poorly" the Gluster FUSE client performed. I know that small block sizes are not Gluster's favorite, but even at larger ones the penalty is pretty great.

Is this to be expected, or is there some configuration that I am missing?
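
The only knobs I have found so far are the write-behind settings; this is what I have been experimenting with, values picked somewhat arbitrarily, in case someone can tell me whether these are even the right ones:

  gluster volume set backups performance.write-behind on
  gluster volume set backups performance.write-behind-window-size 4MB
  gluster volume set backups performance.flush-behind on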

If providing any more info would be helpful, please let me know.

Thanks!

root@vc1test001 /root 489# mount -t nfs dc1strg001x:/zfspool/glusterfs/backups /mnt/backups_nfs
root@vc1test001 /root 490# mount -t glusterfs dc1strg001x:backups /mnt/backups_gluster

root@vc1test001 /mnt/backups_nfs 492# dd if=/dev/zero of=testfile bs=16k count=16384
16384+0 records in
16384+0 records out
268435456 bytes (268 MB) copied, 2.6763 s, 100 MB/s

root@vc1test001 /mnt/backups_nfs 510# dd if=/dev/zero of=testfile1 bs=64k count=16384
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 10.7434 s, 99.9 MB/s

root@vc1test001 /mnt/backups_nfs 517# dd if=/dev/zero of=testfile1 bs=128k count=16384
16384+0 records in
16384+0 records out
2147483648 bytes (2.1 GB) copied, 19.0354 s, 113 MB/s

root@vc1test001 /mnt/backups_gluster 495# dd if=/dev/zero of=testfile bs=16k count=16384
16384+0 records in
16384+0 records out
268435456 bytes (268 MB) copied, 102.058 s, 2.6 MB/s

root@vc1test001 /mnt/backups_gluster 513# dd if=/dev/zero of=testfile1 bs=64k count=16384
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 114.053 s, 9.4 MB/s

root@vc1test001 /mnt/backups_gluster 514# dd if=/dev/zero of=testfile1 bs=128k count=16384
16384+0 records in
16384+0 records out
2147483648 bytes (2.1 GB) copied, 123.904 s, 17.3 MB/s

root@vc1test001 /tmp 504# rsync -av --progress testfile1 /mnt/backups_nfs/
sending incremental file list
testfile1
  1,073,741,824 100%   89.49MB/s    0:00:11 (xfr#1, to-chk=0/1)

sent 1,074,004,057 bytes  received 35 bytes  74,069,247.72 bytes/sec
total size is 1,073,741,824  speedup is 1.00

root@vc1test001 /tmp 505# rsync -av --progress testfile1 /mnt/backups_gluster/
sending incremental file list
testfile1
  1,073,741,824 100%   25.94MB/s    0:00:39 (xfr#1, to-chk=0/1)

sent 1,074,004,057 bytes  received 35 bytes  27,189,977.01 bytes/sec
total size is 1,073,741,824  speedup is 1.00

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users


