Increased latency causes rapid decrease of FTP transfer speed from/to a GlusterFS filesystem

Hello all,

we have installed a two-node GlusterFS setup on Debian 9.x using the
glusterfs 5.6-1 packages, and tried transferring files via FTP from/to the
GlusterFS filesystem.

While the FTP download rate is around 7.5 MB/s at baseline, after increasing the network latency to 10 ms (see the tc command below) it drops rapidly to approximately 1.3 MB/s.
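As a back-of-the-envelope check (illustrative arithmetic only, not a measurement of GlusterFS internals): the data in flight per round trip is roughly throughput × RTT, and plugging in the numbers above suggests only on the order of kilobytes are outstanding at a time, which would explain the latency sensitivity:

```python
# Back-of-the-envelope: data in flight ~ throughput x RTT (Little's law).
# The numbers are the ping/FTP measurements quoted in this mail; this is
# illustrative arithmetic, not a measurement of GlusterFS itself.

def in_flight_kib(throughput_mb_s: float, rtt_ms: float) -> float:
    """Implied data in flight per round trip, in KiB."""
    return throughput_mb_s * 1e6 * (rtt_ms / 1e3) / 1024

baseline = in_flight_kib(7.5, 0.4)    # ~2.9 KiB at ~0.4 ms RTT
delayed = in_flight_kib(1.3, 10.3)    # ~13 KiB at ~10.3 ms RTT
print(f"baseline: ~{baseline:.1f} KiB in flight")
print(f"with 10 ms delay: ~{delayed:.1f} KiB in flight")
```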


# ping xx.xx.xx.xx
64 bytes from xx.xx.xx.xx: icmp_seq=1 ttl=64 time=0.426 ms
64 bytes from xx.xx.xx.xx: icmp_seq=2 ttl=64 time=0.443 ms
64 bytes from xx.xx.xx.xx: icmp_seq=3 ttl=64 time=0.312 ms
64 bytes from xx.xx.xx.xx: icmp_seq=4 ttl=64 time=0.373 ms
64 bytes from xx.xx.xx.xx: icmp_seq=5 ttl=64 time=0.415 ms
^C
--- xx.xx.xx.xx ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4100ms
rtt min/avg/max/mdev = 0.312/0.393/0.443/0.053 ms

# tc qdisc add dev eth0 root netem delay 10ms
# ping xx.xx.xx.xx
PING xx.xx.xx.xx (xx.xx.xx.xx) 56(84) bytes of data.
64 bytes from xx.xx.xx.xx: icmp_seq=1 ttl=64 time=10.3 ms
64 bytes from xx.xx.xx.xx: icmp_seq=2 ttl=64 time=10.3 ms
64 bytes from xx.xx.xx.xx: icmp_seq=3 ttl=64 time=10.3 ms
64 bytes from xx.xx.xx.xx: icmp_seq=4 ttl=64 time=10.3 ms
64 bytes from xx.xx.xx.xx: icmp_seq=5 ttl=64 time=10.4 ms
64 bytes from xx.xx.xx.xx: icmp_seq=6 ttl=64 time=10.4 ms
^C
--- xx.xx.xx.xx ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5007ms
rtt min/avg/max/mdev = 10.304/10.387/10.492/0.138 ms
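For reference, the injected netem delay can be changed or removed again with the standard tc commands (eth0 assumed, as above):

```shell
# change the injected delay without removing and re-adding the qdisc
tc qdisc change dev eth0 root netem delay 10ms
# remove the artificial delay entirely
tc qdisc del dev eth0 root netem
# show the currently installed qdiscs to verify
tc qdisc show dev eth0
```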

root@server1:~# gluster vol list
GVOLUME
root@server1:~# gluster vol info

Volume Name: GVOLUME
Type: Replicate
Volume ID: xxx
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: server1.lab:/srv/fs/ftp/brick
Brick2: server2.lab:/srv/fs/ftp/brick
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: off
transport.address-family: inet
features.cache-invalidation: on
performance.stat-prefetch: on
performance.md-cache-timeout: 60
network.inode-lru-limit: 1048576
cluster.quorum-type: auto
performance.cache-max-file-size: 512KB
performance.cache-size: 1GB
performance.flush-behind: on
performance.nfs.flush-behind: on
performance.write-behind-window-size: 512KB
performance.nfs.write-behind-window-size: 512KB
performance.strict-o-direct: off
performance.nfs.strict-o-direct: off
performance.read-after-open: on
performance.io-thread-count: 32
client.event-threads: 4
server.event-threads: 4
performance.write-behind: on
performance.read-ahead: on
performance.readdir-ahead: on
nfs.export-dirs: off
nfs.addr-namelookup: off
nfs.rdirplus: on
features.barrier-timeout: 1
features.trash: off
cluster.quorum-reads: true
auth.allow: 127.0.0.1,xx.xx.xx.xx,xx.xx.xx.yy
auth.reject: all
root@server1:~#
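In case it matters, this is the kind of change I was considering experimenting with. The option names come from the volume info above, but the values are guesses on my part for a 10 ms link, not recommendations:

```shell
# enlarge the write-behind window so more data is in flight per round trip
# (currently 512KB; 4MB is only a guess, not a tested value)
gluster volume set GVOLUME performance.write-behind-window-size 4MB
# raise the cached-file-size ceiling to match (currently 512KB)
gluster volume set GVOLUME performance.cache-max-file-size 4MB
# verify the reconfigured options
gluster volume info GVOLUME
```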

Can somebody help me tune or resolve this issue?
Thanks and kind regards,

peterk

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
