After upgrading from version 3.10.5 to 3.12.14, I started a rebalance to distribute data across the bricks, but something is going wrong.
The rebalance failed on several nodes, and the estimated time to complete the procedure seems very high.
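For context, I started the rebalance and I am monitoring it with the standard commands ("tier2" is the volume shown below):

[root@s04 ~]# gluster volume rebalance tier2 start
[root@s04 ~]# gluster volume rebalance tier2 status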
When a rebalance process fails, I see errors like the following in /var/log/glusterfs/tier2-rebalance.log:
[2018-09-25 08:09:18.340658] C [rpc-clnt-ping.c:166:rpc_clnt_ping_timer_expired] 0-tier2-client-4: server 192.168.0.52:49153 has not responded in the last 42 seconds, disconnecting.
This looks like a network or timeout problem, but the observed network usage/traffic is not particularly high.
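To quantify the disconnects, I count the ping-timeout events per brick connection with a quick one-liner (the awk field position is based on the log line above):

[root@s04 ~]# grep "rpc_clnt_ping_timer_expired" /var/log/glusterfs/tier2-rebalance.log | awk '{print $5}' | sort | uniq -c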
Do you think I should modify some volume options related to thread and/or network parameters in my configuration?
(The volume spans 6 servers; each server has 2 10-core CPUs, 64 GB RAM, 1 SSD dedicated to the OS, and 12 x 10 TB HDDs.)
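If option tuning is the right direction, I was thinking of something along these lines; the values are only my untested guesses, so please correct me if these are the wrong knobs:

[root@s04 ~]# gluster volume set tier2 network.ping-timeout 120
[root@s04 ~]# gluster volume set tier2 client.event-threads 8
[root@s04 ~]# gluster volume set tier2 server.event-threads 8
[root@s04 ~]# gluster volume set tier2 performance.io-thread-count 32
[root@s04 ~]# gluster volume set tier2 cluster.rebal-throttle lazy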
[root@s04 ~]# gluster vol info
Volume Name: tier2
Type: Distributed-Disperse
Volume ID: a28d88c5-3295-4e35-98d4-210b3af9358c
Status: Started
Snapshot Count: 0
Number of Bricks: 12 x (4 + 2) = 72
Transport-type: tcp
Bricks:
Brick1: s01-stg:/gluster/mnt1/brick
Brick2: s02-stg:/gluster/mnt1/brick
Brick3: s03-stg:/gluster/mnt1/brick
Brick4: s01-stg:/gluster/mnt2/brick
Brick5: s02-stg:/gluster/mnt2/brick
Brick6: s03-stg:/gluster/mnt2/brick
Brick7: s01-stg:/gluster/mnt3/brick
Brick8: s02-stg:/gluster/mnt3/brick
Brick9: s03-stg:/gluster/mnt3/brick
Brick10: s01-stg:/gluster/mnt4/brick
Brick11: s02-stg:/gluster/mnt4/brick
Brick12: s03-stg:/gluster/mnt4/brick
Brick13: s01-stg:/gluster/mnt5/brick
Brick14: s02-stg:/gluster/mnt5/brick
Brick15: s03-stg:/gluster/mnt5/brick
Brick16: s01-stg:/gluster/mnt6/brick
Brick17: s02-stg:/gluster/mnt6/brick
Brick18: s03-stg:/gluster/mnt6/brick
Brick19: s01-stg:/gluster/mnt7/brick
Brick20: s02-stg:/gluster/mnt7/brick
Brick21: s03-stg:/gluster/mnt7/brick
Brick22: s01-stg:/gluster/mnt8/brick
Brick23: s02-stg:/gluster/mnt8/brick
Brick24: s03-stg:/gluster/mnt8/brick
Brick25: s01-stg:/gluster/mnt9/brick
Brick26: s02-stg:/gluster/mnt9/brick
Brick27: s03-stg:/gluster/mnt9/brick
Brick28: s01-stg:/gluster/mnt10/brick
Brick29: s02-stg:/gluster/mnt10/brick
Brick30: s03-stg:/gluster/mnt10/brick
Brick31: s01-stg:/gluster/mnt11/brick
Brick32: s02-stg:/gluster/mnt11/brick
Brick33: s03-stg:/gluster/mnt11/brick
Brick34: s01-stg:/gluster/mnt12/brick
Brick35: s02-stg:/gluster/mnt12/brick
Brick36: s03-stg:/gluster/mnt12/brick
Brick37: s04-stg:/gluster/mnt1/brick
Brick38: s04-stg:/gluster/mnt2/brick
Brick39: s04-stg:/gluster/mnt3/brick
Brick40: s04-stg:/gluster/mnt4/brick
Brick41: s04-stg:/gluster/mnt5/brick
Brick42: s04-stg:/gluster/mnt6/brick
Brick43: s04-stg:/gluster/mnt7/brick
Brick44: s04-stg:/gluster/mnt8/brick
Brick45: s04-stg:/gluster/mnt9/brick
Brick46: s04-stg:/gluster/mnt10/brick
Brick47: s04-stg:/gluster/mnt11/brick
Brick48: s04-stg:/gluster/mnt12/brick
Brick49: s05-stg:/gluster/mnt1/brick
Brick50: s05-stg:/gluster/mnt2/brick
Brick51: s05-stg:/gluster/mnt3/brick
Brick52: s05-stg:/gluster/mnt4/brick
Brick53: s05-stg:/gluster/mnt5/brick
Brick54: s05-stg:/gluster/mnt6/brick
Brick55: s05-stg:/gluster/mnt7/brick
Brick56: s05-stg:/gluster/mnt8/brick
Brick57: s05-stg:/gluster/mnt9/brick
Brick58: s05-stg:/gluster/mnt10/brick
Brick59: s05-stg:/gluster/mnt11/brick
Brick60: s05-stg:/gluster/mnt12/brick
Brick61: s06-stg:/gluster/mnt1/brick
Brick62: s06-stg:/gluster/mnt2/brick
Brick63: s06-stg:/gluster/mnt3/brick
Brick64: s06-stg:/gluster/mnt4/brick
Brick65: s06-stg:/gluster/mnt5/brick
Brick66: s06-stg:/gluster/mnt6/brick
Brick67: s06-stg:/gluster/mnt7/brick
Brick68: s06-stg:/gluster/mnt8/brick
Brick69: s06-stg:/gluster/mnt9/brick
Brick70: s06-stg:/gluster/mnt10/brick
Brick71: s06-stg:/gluster/mnt11/brick
Brick72: s06-stg:/gluster/mnt12/brick
Options Reconfigured:
network.ping-timeout: 60
diagnostics.count-fop-hits: on
diagnostics.latency-measurement: on
cluster.server-quorum-type: server
features.default-soft-limit: 90
features.quota-deem-statfs: on
disperse.cpu-extensions: auto
network.inode-lru-limit: 50000
performance.md-cache-timeout: 600
performance.cache-invalidation: on
performance.stat-prefetch: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
cluster.readdir-optimize: on
performance.parallel-readdir: off
performance.readdir-ahead: on
cluster.lookup-optimize: on
client.event-threads: 4
server.event-threads: 4
nfs.disable: on
transport.address-family: inet
cluster.quorum-type: auto
cluster.min-free-disk: 10
performance.client-io-threads: on
features.quota: on
features.inode-quota: on
features.bitrot: on
features.scrub: Active
cluster.brick-multiplex: on
cluster.server-quorum-ratio: 51%
If it can help, I paste here the output of the "free -m" command executed on all the cluster nodes:
The result is almost the same on every node. In your opinion, is the available RAM enough to support the data movement?
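(For completeness, I collected it like this, assuming passwordless SSH between the nodes; the hostnames are taken from the brick list above:)

[root@s04 ~]# for h in s01-stg s02-stg s03-stg s04-stg s05-stg s06-stg; do echo "== $h =="; ssh $h free -m; done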
Thank you in advance.
Sorry for the long message, but I am trying to provide all the available information.