By setting cluster.min-free-disk (defaults to 10%) you can at
least ensure that your new bricks are utilized as needed,
preventing your smaller bricks from overfilling.
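For example, something along these lines should work (the volume
name "myvol" is a placeholder, and 20% is just an illustrative
threshold):

    # Treat bricks with less than 20% free space as full; DHT will
    # then place new files on bricks that still have headroom.
    gluster volume set myvol cluster.min-free-disk 20%

    # Verify the setting took effect.
    gluster volume get myvol cluster.min-free-disk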
Hi,
We have a 2-node, distributed-replicated setup (11
bricks on each node). Each of these bricks is 6TB in size.
node_A:/brick1 replicates node_B:/brick1
node_A:/brick2 replicates node_B:/brick2
node_A:/brick3 replicates node_B:/brick3
…
…
node_A:/brick11 replicates node_B:/brick11
We recently added 5 more bricks per node, for a total
of 16 bricks on each node. Each of these new bricks is 8TB in
size.
We completed a full rebalance operation (status says
“completed”).
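For reference, the expansion and rebalance used the standard
commands, roughly like this ("myvol" stands in for our volume
name, and the new brick paths are assumed to follow the same
naming pattern as the old ones):

    # add the five new brick pairs (bricks are added in multiples
    # of the replica count, 2, so each pair stays mirrored)
    gluster volume add-brick myvol \
        node_A:/brick12 node_B:/brick12 \
        node_A:/brick13 node_B:/brick13 \
        node_A:/brick14 node_B:/brick14 \
        node_A:/brick15 node_B:/brick15 \
        node_A:/brick16 node_B:/brick16

    # redistribute existing data across all 16 brick pairs
    gluster volume rebalance myvol start

    # poll until both nodes report "completed"
    gluster volume rebalance myvol status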
However, the end result is somewhat unexpected:
Filesystem  Size  Used  Avail  Use%
/dev/sdl1 * 7.3T  2.2T  5.2T   29%
/dev/sdk1 * 7.3T  2.0T  5.3T   28%
/dev/sdj1 * 7.3T  2.0T  5.3T   28%
/dev/sdn1 * 7.3T  2.2T  5.2T   30%
/dev/sdp1 * 7.3T  2.2T  5.2T   30%
/dev/sdc1   5.5T  2.3T  3.2T   42%
/dev/sdf1   5.5T  2.3T  3.2T   43%
/dev/sdo1   5.5T  2.3T  3.2T   42%
/dev/sda1   5.5T  2.3T  3.2T   43%
/dev/sdi1   5.5T  2.3T  3.2T   42%
/dev/sdh1   5.5T  2.3T  3.2T   43%
/dev/sde1   5.5T  2.3T  3.2T   42%
/dev/sdb1   5.5T  2.3T  3.2T   42%
/dev/sdm1   5.5T  2.3T  3.2T   42%
/dev/sdg1   5.5T  2.3T  3.2T   42%
/dev/sdd1   5.5T  2.3T  3.2T   42%
The rows marked with an asterisk in the df output are the new
8TB drives.
Was I wrong to expect the % usage to be roughly
equal? Is there some parameter I need to tweak to make
rebalance account for disk sizes properly?
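(Back-of-the-envelope, from the df output above: per node that is
roughly 5 x 2.1T + 11 x 2.3T ≈ 36T used, out of
5 x 7.3T + 11 x 5.5T = 97T of capacity, so an even spread would
put every brick at about 37% used, rather than ~29% on the new
bricks and ~42% on the old ones.)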
I’m using Gluster 3.8 on Ubuntu.
Thanks,
Jackie