Joe,

Thanks for that, that was educational.

The Gluster docs claim that since 3.7, DHT hash ranges are weighted based on brick sizes by default:

$ gluster volume get <vol> cluster.weighted-rebalance
Option                        Value
------                        -----
cluster.weighted-rebalance    on

When running rebalance with force, I see this in the rebalance log:

...
[2016-10-11 16:38:37.655144] I [MSGID: 109045] [dht-selfheal.c:1751:dht_fix_layout_of_directory] 0-cronut-dht: subvolume 10 (cronut-replicate-10): 5721127 chunks
[2016-10-11 16:38:37.655154] I [MSGID: 109045] [dht-selfheal.c:1751:dht_fix_layout_of_directory] 0-cronut-dht: subvolume 11 (cronut-replicate-11): 7628846 chunks
…

Subvolumes >= 11 are 8TB; subvolumes <= 10 are 6TB. The chunk counts above are roughly in the same 4:3 proportion (7628846 / 5721127 ≈ 1.33 ≈ 8/6), so the weighting does appear to be taking effect.

Do you think it is now possible to even out usage on all bricks by % utilized? That would be the case if gluster rebalanced purely according to the weighted DHT layout, performing all of the data migrations that layout implies. We would prefer not to depend on cluster.min-free-disk to manage overflow later on, since every access to an overflowed file then costs an extra read of the link file before the actual IOP.

Thanks,
Jackie
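P.S. For anyone following along, here is a rough sketch of what weight-proportional layout assignment looks like. This is a simplified illustration only, not Gluster's actual dht_fix_layout_of_directory logic; the weighted_layout helper and the brick sizes in the example are hypothetical, and real layouts are derived per directory from statfs values rather than nominal TB figures.

#!/usr/bin/env python3
# Simplified sketch: split the 32-bit DHT hash space into ranges
# proportional to each brick's size, analogous to the per-subvolume
# "chunks" reported in the rebalance log above.

HASH_SPACE = 2**32  # DHT hashes filenames into [0, 2^32)

def weighted_layout(brick_sizes_tb):
    """Return (start, end) hash ranges proportional to brick size."""
    total = sum(brick_sizes_tb)
    layout = []
    start = 0
    for i, size in enumerate(brick_sizes_tb):
        if i == len(brick_sizes_tb) - 1:
            # Last brick absorbs rounding so the ranges cover the space.
            end = HASH_SPACE - 1
        else:
            end = start + round(HASH_SPACE * size / total) - 1
        layout.append((start, end))
        start = end + 1
    return layout

# Hypothetical mix matching this thread: some 6TB and some 8TB bricks.
sizes = [6, 6, 8, 8]
for i, (lo, hi) in enumerate(weighted_layout(sizes)):
    share = (hi - lo + 1) / HASH_SPACE
    print(f"subvolume {i}: 0x{lo:08x} - 0x{hi:08x}  ({share:.1%})")

With equal weights each brick would get 25% of the hash space; with the 6/8TB mix above the 8TB bricks each get ~28.6% and the 6TB bricks ~21.4%, which is the same 4:3 skew as the chunk counts in the log.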