Hello,
Here is a speed test with a new setup we just made with Gluster 3.10; there are no other differences except the GlusterFS native mount versus NFS. NFS comes out roughly 74 times faster (4156 vs 56 files/sec):
root@app1:~/smallfile-master# mount -t glusterfs -o use-readdirp=no,log-level=WARNING,log-file=/var/log/glusterxxx.log 192.168.140.41:/www /var/www
root@app1:~/smallfile-master# ./smallfile_cli.py --top /var/www/test --host-set 192.168.140.41 --threads 8 --files 500 --file-size 64 --record-size 64
smallfile version 3.0
hosts in test : ['192.168.140.41']
top test directory(s) : ['/var/www/test']
operation : cleanup
files/thread : 500
threads : 8
record size (KB, 0 = maximum) : 64
file size (KB) : 64
file size distribution : fixed
files per dir : 100
dirs per dir : 10
threads share directories? : N
filename prefix :
filename suffix :
hash file number into dir.? : N
fsync after modify? : N
pause between files (microsec) : 0
finish all requests? : Y
stonewall? : Y
measure response times? : N
verify read? : Y
verbose? : False
log to stderr? : False
ext.attr.size : 0
ext.attr.count : 0
permute host directories? : N
remote program directory : /root/smallfile-master
network thread sync. dir. : /var/www/test/network_shared
starting all threads by creating starting gate file /var/www/test/network_shared/starting_gate.tmp
host = 192.168.140.41,thr = 00,elapsed = 68.845450,files = 500,records = 0,status = ok
host = 192.168.140.41,thr = 01,elapsed = 67.601088,files = 500,records = 0,status = ok
host = 192.168.140.41,thr = 02,elapsed = 58.677994,files = 500,records = 0,status = ok
host = 192.168.140.41,thr = 03,elapsed = 65.901922,files = 500,records = 0,status = ok
host = 192.168.140.41,thr = 04,elapsed = 66.971720,files = 500,records = 0,status = ok
host = 192.168.140.41,thr = 05,elapsed = 71.245102,files = 500,records = 0,status = ok
host = 192.168.140.41,thr = 06,elapsed = 67.574845,files = 500,records = 0,status = ok
host = 192.168.140.41,thr = 07,elapsed = 54.263242,files = 500,records = 0,status = ok
total threads = 8
total files = 4000
100.00% of requested files processed, minimum is 70.00
71.245102 sec elapsed time
56.144211 files/sec
umount /var/www
root@app1:~/smallfile-master# mount -t nfs -o tcp 192.168.140.41:/www /var/www
root@app1:~/smallfile-master# ./smallfile_cli.py --top /var/www/test --host-set 192.168.140.41 --threads 8 --files 500 --file-size 64 --record-size 64
smallfile version 3.0
hosts in test : ['192.168.140.41']
top test directory(s) : ['/var/www/test']
operation : cleanup
files/thread : 500
threads : 8
record size (KB, 0 = maximum) : 64
file size (KB) : 64
file size distribution : fixed
files per dir : 100
dirs per dir : 10
threads share directories? : N
filename prefix :
filename suffix :
hash file number into dir.? : N
fsync after modify? : N
pause between files (microsec) : 0
finish all requests? : Y
stonewall? : Y
measure response times? : N
verify read? : Y
verbose? : False
log to stderr? : False
ext.attr.size : 0
ext.attr.count : 0
permute host directories? : N
remote program directory : /root/smallfile-master
network thread sync. dir. : /var/www/test/network_shared
starting all threads by creating starting gate file /var/www/test/network_shared/starting_gate.tmp
host = 192.168.140.41,thr = 00,elapsed = 0.962424,files = 500,records = 0,status = ok
host = 192.168.140.41,thr = 01,elapsed = 0.942673,files = 500,records = 0,status = ok
host = 192.168.140.41,thr = 02,elapsed = 0.940622,files = 500,records = 0,status = ok
host = 192.168.140.41,thr = 03,elapsed = 0.915218,files = 500,records = 0,status = ok
host = 192.168.140.41,thr = 04,elapsed = 0.934349,files = 500,records = 0,status = ok
host = 192.168.140.41,thr = 05,elapsed = 0.922466,files = 500,records = 0,status = ok
host = 192.168.140.41,thr = 06,elapsed = 0.954381,files = 500,records = 0,status = ok
host = 192.168.140.41,thr = 07,elapsed = 0.946127,files = 500,records = 0,status = ok
total threads = 8
total files = 4000
100.00% of requested files processed, minimum is 70.00
0.962424 sec elapsed time
4156.173189 files/sec
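If it helps to see where the time goes on the native mount, we can also capture a brick-side profile while the slow glusterfs run repeats. A quick sketch (the volume is named www as above):
gluster volume profile www start
# ... re-run the smallfile test on the glusterfs mount ...
gluster volume profile www info
gluster volume profile www stop
The per-fop counts and latencies should show whether LOOKUP/STAT traffic dominates the FUSE case.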
-----Original message-----
From: Jo Goossens <jo.goossens@xxxxxxxxxxxxxxxx>
Sent: Tue 11-07-2017 11:26
Subject: Re: Gluster native mount is really slow compared to nfs
To: gluster-users@xxxxxxxxxxx; Soumya Koduri <skoduri@xxxxxxxxxx>;
CC: Ambarish Soman <asoman@xxxxxxxxxx>;
Hi all,
One more thing: we have 3 app servers with Gluster on them, replicated across 3 different Gluster nodes (so the Gluster nodes are app servers at the same time). We could actually almost work locally if we didn't need the same files on all 3 nodes plus the redundancy :)
The initial volume was created like this:
gluster volume create www replica 3 transport tcp 192.168.140.41:/gluster/www 192.168.140.42:/gluster/www 192.168.140.43:/gluster/www force
gluster volume set www network.ping-timeout 5
gluster volume set www performance.cache-size 1024MB
gluster volume set www nfs.disable on # No need for NFS currently
gluster volume start www
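To double-check that the replica 3 layout is what we expect and that all bricks are online, something like this shows it (same volume name as above):
gluster volume info www
gluster volume status www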
To my understanding that still wouldn't explain why NFS has such great performance compared to the native mount ...
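If it helps, one more variant we could still test on the native mount is longer FUSE-side caching; just a sketch with illustrative timeout values, nothing we have measured above:
mount -t glusterfs -o attribute-timeout=600,entry-timeout=600,negative-timeout=600,use-readdirp=no,log-level=WARNING 192.168.140.41:/www /var/www
Those options only relax metadata consistency on the client, so they may not be acceptable for the PHP app anyway.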
Regards
Jo
-----Original message-----
From: Soumya Koduri <skoduri@xxxxxxxxxx>
Sent: Tue 11-07-2017 11:16
Subject: Re: Gluster native mount is really slow compared to nfs
To: Jo Goossens <jo.goossens@xxxxxxxxxxxxxxxx>; gluster-users@xxxxxxxxxxx;
CC: Ambarish Soman <asoman@xxxxxxxxxx>; Karan Sandha <ksandha@xxxxxxxxxx>;
+ Ambarish
On 07/11/2017 02:31 PM, Jo Goossens wrote:
> Hello,
>
> We tried tons of settings to get a PHP app running on a native gluster mount:
>
> e.g.: 192.168.140.41:/www /var/www glusterfs defaults,_netdev,backup-volfile-servers=192.168.140.42:192.168.140.43,direct-io-mode=disable 0 0
>
>
>
> I tried several mount variants to speed things up, but without luck.
>
> After that I tried NFS (native Gluster NFSv3 and Ganesha NFSv4); the performance difference was crazy.
>
>
>
> e.g.: 192.168.140.41:/www /var/www nfs4 defaults,_netdev 0 0
>
>
>
> I tried a test like this to confirm the slowness:
>
> ./smallfile_cli.py --top /var/www/test --host-set 192.168.140.41 --threads 8 --files 5000 --file-size 64 --record-size 64
>
> This test finished in around 1.5 seconds with NFS and in more than 250 seconds without NFS (I can't remember the exact numbers, but I reproduced it several times for both).
>
> With the native gluster mount the PHP app had loading times of over 10 seconds; with the NFS mount the PHP app loaded in around 1 second at most, often even less (reproduced several times).
>
>
>
> I tried all kinds of performance settings and variants of these, but nothing helped; the difference stayed huge. Here are some of the settings I played with, in random order:
>
Requesting Ambarish & Karan (cc'ed; they have been working on evaluating the performance of the various access protocols Gluster supports) to look at the settings below and provide input.
Thanks,
Soumya
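PS: a dump of the currently effective volume options would also help; on recent releases something like this captures them in one go (just a sketch, same volume name):
gluster volume get www all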
>
>
> gluster volume set www features.cache-invalidation on
> gluster volume set www features.cache-invalidation-timeout 600
> gluster volume set www performance.stat-prefetch on
> gluster volume set www performance.cache-samba-metadata on
> gluster volume set www performance.cache-invalidation on
> gluster volume set www performance.md-cache-timeout 600
> gluster volume set www network.inode-lru-limit 250000
>
> gluster volume set www performance.cache-refresh-timeout 60
> gluster volume set www performance.read-ahead disable
> gluster volume set www performance.readdir-ahead on
> gluster volume set www performance.parallel-readdir on
> gluster volume set www performance.write-behind-window-size 4MB
> gluster volume set www performance.io-thread-count 64
>
> gluster volume set www performance.client-io-threads on
>
> gluster volume set www performance.cache-size 1GB
> gluster volume set www performance.quick-read on
> gluster volume set www performance.flush-behind on
> gluster volume set www performance.write-behind on
> gluster volume set www nfs.disable on
>
> gluster volume set www client.event-threads 3
> gluster volume set www server.event-threads 3
>
> The NFS HA setup adds a lot of complexity which we wouldn't need at all in our case. Could you please explain what is going on here? Is NFS the only way to get acceptable performance? Did I perhaps miss one crucial setting?
>
>
>
> We're really desperate; thanks a lot for your help!
>
> PS: We tried with Gluster 3.11 and 3.8 on Debian; both had terrible performance when not used with NFS.
>
> Kind regards
>
> Jo Goossens
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users