Re: Gluster native mount is really slow compared to nfs

+ Ambarish

On 07/11/2017 02:31 PM, Jo Goossens wrote:
Hello,

We tried tons of settings to get a PHP app running on a native GlusterFS
mount:

e.g.: 192.168.140.41:/www /var/www glusterfs
defaults,_netdev,backup-volfile-servers=192.168.140.42:192.168.140.43,direct-io-mode=disable
0 0
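
As a quick sanity check (a sketch; the exact flags vary by version), the
mount helper translates these fstab options into glusterfs(8) command-line
flags, so whether e.g. direct-io-mode=disable really reached the client
can be seen on the client process:

ps ax | grep '[g]lusterfs'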

I tried some mount variants in order to speed things up, without luck.

After that I tried NFS (native Gluster NFS 3 and Ganesha NFS 4); the
performance difference was enormous.

e.g.: 192.168.140.41:/www /var/www nfs4 defaults,_netdev 0 0
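
For completeness (a sketch; on Debian, nfsstat ships in the nfs-common
package), the options the NFS client actually negotiated, including the
protocol version, can be checked with:

nfsstat -m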

I tried a test like this to confirm the slowness:

./smallfile_cli.py  --top /var/www/test --host-set 192.168.140.41
--threads 8 --files 5000 --file-size 64 --record-size 64

This test finished in around 1.5 seconds with NFS and in more than 250
seconds with the native mount (I can't remember the exact numbers, but I
reproduced it several times for both).
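
If the exact figures matter, wrapping the run in the shell's time builtin
gives a number that is easy to compare across both mounts (same parameters
as above):

time ./smallfile_cli.py --top /var/www/test --host-set 192.168.140.41 \
  --threads 8 --files 5000 --file-size 64 --record-size 64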

With the native Gluster mount the PHP app had loading times of over 10
seconds; with the NFS mount it loaded in around 1 second at most, often
less (reproduced several times).

I tried all kinds of performance settings and variants of these, but
nothing helped; the difference stayed huge. Here are some of the settings
I played with, in random order:

Requesting Ambarish & Karan (cc'ed), who have been working on evaluating
the performance of the various access protocols Gluster supports, to look
at the settings below and provide input.

Thanks,
Soumya
gluster volume set www features.cache-invalidation on
gluster volume set www features.cache-invalidation-timeout 600
gluster volume set www performance.stat-prefetch on
gluster volume set www performance.cache-samba-metadata on
gluster volume set www performance.cache-invalidation on
gluster volume set www performance.md-cache-timeout 600
gluster volume set www network.inode-lru-limit 250000

gluster volume set www performance.cache-refresh-timeout 60
gluster volume set www performance.read-ahead disable
gluster volume set www performance.readdir-ahead on
gluster volume set www performance.parallel-readdir on
gluster volume set www performance.write-behind-window-size 4MB
gluster volume set www performance.io-thread-count 64

gluster volume set www performance.client-io-threads on

gluster volume set www performance.cache-size 1GB
gluster volume set www performance.quick-read on
gluster volume set www performance.flush-behind on
gluster volume set www performance.write-behind on
gluster volume set www nfs.disable on

gluster volume set www client.event-threads 3
gluster volume set www server.event-threads 3
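
For anyone trying to reproduce this, a sketch of how to capture more data
(assuming the volume really is named www as above): dump the effective
option set, and gather per-operation latency stats while the smallfile
test runs:

gluster volume get www all
gluster volume profile www start
gluster volume profile www info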
The NFS HA setup adds a lot of complexity which we wouldn't need at all
in our case. Could you please explain what is going on here? Is NFS the
only way to get acceptable performance? Did I miss one crucial setting,
perhaps?

We're really desperate, thanks a lot for your help!

PS: We tried with Gluster 3.11 and 3.8 on Debian; both had terrible
performance when not used with NFS.
Kind regards

Jo Goossens
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users