IIUC you have to manually trigger a rebalance:
gluster v rebalance datavol start
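A minimal sketch of the usual sequence (the volume name datavol is taken from the info output below; run on any node of the trusted pool). fix-layout only recalculates the DHT hash ranges so new files land on underfull bricks, while a full rebalance also migrates existing files:

```shell
# Recompute the DHT layout first (fast; only affects where NEW files go)
gluster volume rebalance datavol fix-layout start

# Then migrate existing data to match the new layout (slow; moves files)
gluster volume rebalance datavol start

# Check progress; wait until every node reports "completed"
gluster volume rebalance datavol status
```

On a 47-brick volume a full rebalance can take a long time and adds I/O load, so you may want to schedule it during a quiet period.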
Diego
On 01/12/2023 13:37, Shreyansh Shah wrote:
Hi,
We are running a gluster 9.3 volume with 47 bricks across 13 nodes. We
are noticing uneven data distribution across the bricks. Some bricks are
100% utilised whereas others are below 85%. Because of this, data writes randomly
fail with OSError(28, 'No space left on device') even when the
volume has sufficient space on other bricks. We expect the glusterfs
process to automatically sync and maintain equal data distribution
across the bricks.
This is the volume info:
root@nas-5:~# gluster v info
Volume Name: datavol
Type: Distribute
Volume ID: b4b52b4b-0ea0-4eeb-a359-5a0573d4f83a
Status: Started
Snapshot Count: 0
Number of Bricks: 47
Transport-type: tcp
Bricks:
Brick1: 10.132.2.101:/data/data
Brick2: 10.132.2.101:/data1/data
Brick3: 10.132.2.101:/data2/data
Brick4: 10.132.2.101:/data3/data
Brick5: 10.132.2.102:/data/data
Brick6: 10.132.2.102:/data1/data
Brick7: 10.132.2.102:/data2/data
Brick8: 10.132.2.102:/data3/data
Brick9: 10.132.2.103:/data/data
Brick10: 10.132.2.103:/data1/data
Brick11: 10.132.2.103:/data2/data
Brick12: 10.132.2.103:/data3/data
Brick13: 10.132.2.104:/data/data
Brick14: 10.132.2.104:/data3/data
Brick15: 10.132.2.105:/data1/data
Brick16: 10.132.2.105:/data2/data
Brick17: 10.132.2.106:/data/data
Brick18: 10.132.2.106:/data1/data
Brick19: 10.132.2.106:/data2/data
Brick20: 10.132.2.107:/data/data
Brick21: 10.132.2.107:/data1/data
Brick22: 10.132.2.107:/data2/data
Brick23: 10.132.2.108:/data/data
Brick24: 10.132.2.108:/data1/data
Brick25: 10.132.2.108:/data2/data
Brick26: 10.132.2.109:/data/data
Brick27: 10.132.2.109:/data1/data
Brick28: 10.132.2.109:/data2/data
Brick29: 10.132.2.110:/data/data
Brick30: 10.132.2.110:/data1/data
Brick31: 10.132.2.111:/data/data
Brick32: 10.132.2.111:/data1/data
Brick33: 10.132.2.111:/data2/data
Brick34: 10.132.2.112:/data/data
Brick35: 10.132.2.112:/data1/data
Brick36: 10.132.2.112:/data2/data
Brick37: 10.132.2.113:/data/data
Brick38: 10.132.2.113:/data1/data
Brick39: 10.132.2.113:/data2/data
Brick40: 10.132.2.108:/data3/data
Brick41: 10.132.2.107:/data3/data
Brick42: 10.132.2.106:/data3/data
Brick43: 10.132.2.105:/data3/data
Brick44: 10.132.2.110:/data2/data
Brick45: 10.132.2.105:/data/data
Brick46: 10.132.2.104:/data1/data
Brick47: 10.132.2.104:/data2/data
Options Reconfigured:
performance.client-io-threads: on
cluster.min-free-disk: 2%
performance.cache-refresh-timeout: 60
client.event-threads: 4
server.event-threads: 4
network.ping-timeout: 90
storage.health-check-interval: 60
storage.health-check-timeout: 60
performance.io-cache-size: 8GB
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
root@nas-5:~# glusterfs --version
glusterfs 9.3
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.
root@nas-5:~#
This is the disk usage distribution when the write failed:
Node 1:
/dev/bcache1 9.7T 8.5T 1.2T 88% /data
/dev/bcache3 3.9T 3.3T 563G 86% /data1
/dev/bcache0 3.9T 3.3T 560G 86% /data2
/dev/bcache2 3.9T 3.5T 395G 90% /data3
Node 2:
/dev/bcache0 9.7T 8.5T 1.2T 88% /data
/dev/bcache1 3.9T 3.5T 421G 90% /data1
/dev/bcache2 3.9T 3.6T 330G 92% /data2
/dev/bcache3 3.9T 3.4T 513G 87% /data3
Node 3:
/dev/bcache0 9.7T 8.8T 882G 92% /data
/dev/bcache1 3.9T 3.6T 335G 92% /data1
/dev/bcache2 3.9T 3.4T 532G 87% /data2
/dev/bcache3 3.9T 3.3T 564G 86% /data3
Node 4:
/dev/bcache0 9.7T 8.7T 982G 91% /data
/dev/bcache1 3.9T 3.5T 424G 90% /data1
/dev/bcache2 3.9T 3.4T 549G 87% /data2
/dev/bcache3 3.9T 3.6T 344G 92% /data3
Node 5:
/dev/bcache0 9.7T 8.5T 1.3T 88% /data
/dev/bcache1 3.9T 3.6T 288G 93% /data1
/dev/bcache2 3.9T 3.4T 470G 89% /data2
/dev/bcache3 9.9T 9.8T 101G 100% /data3
Node 6:
/dev/bcache0 9.7T 8.2T 1.5T 86% /data
/dev/bcache1 3.9T 3.4T 526G 87% /data1
/dev/bcache2 3.9T 3.5T 431G 90% /data2
/dev/bcache3 9.9T 8.9T 1.1T 90% /data3
Node 7:
/dev/bcache0 9.7T 8.9T 783G 93% /data
/dev/bcache1 3.9T 3.3T 561G 86% /data1
/dev/bcache2 3.9T 3.5T 360G 91% /data2
/dev/bcache3 9.9T 8.7T 1.2T 89% /data3
Node 8:
/dev/bcache0 9.7T 8.7T 994G 90% /data
/dev/bcache1 3.9T 3.3T 645G 84% /data1
/dev/bcache2 3.9T 3.4T 519G 87% /data2
/dev/bcache3 9.9T 9.0T 868G 92% /data3
Node 9:
/dev/bcache0 10T 8.6T 1.4T 87% /data
/dev/bcache1 8.0T 6.7T 1.4T 84% /data1
/dev/bcache2 8.0T 6.8T 1.3T 85% /data2
Node 10:
/dev/bcache0 10T 8.8T 1.3T 88% /data
/dev/bcache1 8.0T 6.6T 1.4T 83% /data1
/dev/bcache2 8.0T 7.0T 990G 88% /data2
Node 11:
/dev/bcache0 10T 8.1T 1.9T 82% /data
/dev/bcache1 10T 8.5T 1.5T 86% /data1
/dev/bcache2 10T 8.4T 1.6T 85% /data2
Node 12:
/dev/bcache0 10T 8.4T 1.6T 85% /data
/dev/bcache1 10T 8.4T 1.6T 85% /data1
/dev/bcache2 10T 8.2T 1.8T 83% /data2
Node 13:
/dev/bcache1 10T 8.7T 1.3T 88% /data1
/dev/bcache2 10T 8.8T 1.2T 88% /data2
/dev/bcache0 10T 8.6T 1.5T 86% /data
--
Regards,
Shreyansh Shah
AlphaGrep Securities Pvt. Ltd.
________
Community Meeting Calendar:
Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
--
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786