Re: Slow performance over rsync in Replicated-Distributed Setup

Dear Shubhank,

small-file performance like this is usually slow on GlusterFS.

Can you provide more details about your setup (ZFS settings, bonding, tuned-adm profile, etc.)?


From a Gluster point of view, setting performance.write-behind-window-size to 128MB increases performance.
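A minimal sketch of that setting (assuming your volume is named glusterStore, as in your vol info below):

# enlarge the per-file write-behind buffer for the volume
gluster volume set glusterStore performance.write-behind-window-size 128MB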

With that knob set, I was able to hit the CPU limit using the smallfile benchmark tool (available on GitHub) and the native GlusterFS client.
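A smallfile run along these lines should reproduce that (the parameters and mount path are illustrative, not my exact ones):

# 8 threads, 10,000 files per thread, 64 KB each, on the gluster mount
python smallfile_cli.py --operation create --threads 8 --files 10000 --file-size 64 --top /mnt/glusterStore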


Furthermore, throughput increases if you run several rsync processes in parallel; msrsync (on GitHub) works well here.
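Something like this (a sketch; source and destination paths are placeholders):

# bucket the file list and run 8 rsync processes in parallel
msrsync -p 8 --progress --stats /data/source/ /mnt/glusterStore/dest/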


Regards,

Felix


On 06/03/2021 15:27, Shubhank Gaur wrote:
Hello users,

I started using Gluster just a few weeks ago and I am rocking a Replicated-Distributed setup with arbiters (A) and SATA volumes (V). I have 6 data bricks and 3 arbiter bricks in this setup:
V+V+A | V+V+A | V+V+A  

All these bricks are spread across 3 different nodes, each connected at 1 Gbit. Due to hardware limitations, SSDs or a 10 Gbit network are not available.

But even then, testing via iperf and a plain rsync of files between the servers, I can easily achieve ~700 Mbps:
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  49.9 MBytes   419 Mbits/sec   21    132 KBytes
[  4]   1.00-2.00   sec  80.0 MBytes   671 Mbits/sec    0    214 KBytes
[  4]   2.00-3.00   sec  87.0 MBytes   730 Mbits/sec    3    228 KBytes
[  4]   3.00-4.00   sec  91.6 MBytes   769 Mbits/sec   15    215 KBytes
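
For reference, a sketch of the kind of invocation that produces the output above (assuming iperf3, which matches the columns; the IP is a placeholder):

iperf3 -s               # on the receiving node
iperf3 -c 5.0.0.1 -t 10 # on the sending node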

But when rsyncing data from the same server to another node with a mounted Gluster volume, I get a measly 50 Mbps (7 MB/s).
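The slow path is a plain rsync onto the FUSE mount, roughly like this (the paths are placeholders, not my exact ones):

rsync -av --progress /data/ /mnt/glusterStore/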

All servers have 64 GB RAM; memory usage is around 50% and CPU usage is below 10%.
All bricks are ZFS volumes with no RAID: each disk is formatted as its own ZFS pool (JBOD).
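Each brick pool was created along these lines (a sketch; the pool and device names are placeholders):

# one whole disk per pool, no redundancy
zpool create zpool1 /dev/sdb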


My Gluster Vol Info

gluster vol info

Volume Name: glusterStore
Type: Distributed-Replicate
Volume ID: c7ac8094-f379-45fc-8cfd-f2937355e03d
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1: 62.0.0.1:/zpool1/proxmox
Brick2: 5.0.0.1:/zpool1/proxmox
Brick3: 62.0.0.1:/home/glusterArbiter (arbiter)
Brick4: 62.0.0.1:/zpool2/proxmox
Brick5: 5.0.0.1:/zpool2/proxmox
Brick6: 62.0.0.2:/home/glusterArbiter2 (arbiter)
Brick7: 62.0.0.2:/zpool/proxmox
Brick8: 5.0.0.1:/zpool3/proxmox
Brick9: 62.0.0.2:/home/glusterArbiter (arbiter)
Options Reconfigured:
performance.readdir-ahead: enable
cluster.rsync-hash-regex: none
client.event-threads: 16
server.event-threads: 16
network.ping-timeout: 5
performance.normal-prio-threads: 64
performance.high-prio-threads: 64
performance.io-thread-count: 64
performance.cache-size: 1GB
performance.read-ahead: off
performance.io-cache: off
performance.flush-behind: off
performance.quick-read: on
network.frame-timeout: 60
storage.batch-fsync-delay-usec: 0
server.allow-insecure: on
performance.stat-prefetch: off
cluster.lookup-optimize: on
performance.write-behind: on
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off


Regards

________



Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users