Hi! We have a "problem" with the size of our volumes after replacing three bricks. Our Gluster farm is configured with a Striped-Distributed-Replicated scheme using 8 bricks of 2 TB each:

Brick 1 /gfs Used 70%
Brick 2 /gfs Used 47%
Brick 3 /gfs Used 60%
Brick 4 /gfs Used 52%
Brick 5 /gfs Used 25%
Brick 6 /gfs Used 16%
Brick 7 /gfs Used 15%
Brick 8 /gfs Used 52%

The bricks replaced (the replace process itself went without problems) were 5, 6, and 7. Our questions are: is this imbalance normal? Do we have to do something about it? Is a volume rebalance safe and necessary?

We are running Gluster 3.3.0 on an InfiniBand network. Thanks!
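In case a rebalance does turn out to be the right step, here is a minimal sketch of the relevant CLI commands (assuming a hypothetical volume name `myvol`; please check the admin guide for your 3.3.x release before running this against production data):

```shell
# Inspect the current volume layout and per-brick usage first
gluster volume info myvol
gluster volume status myvol detail

# fix-layout only rewrites the directory layout so new bricks
# receive future writes; a plain rebalance also migrates
# existing files onto the new bricks
gluster volume rebalance myvol fix-layout start
gluster volume rebalance myvol start

# Monitor progress; the operation can be stopped if needed
gluster volume rebalance myvol status
gluster volume rebalance myvol stop
```

These commands assume all peers are healthy (`gluster peer status`) before starting; the rebalance runs in the background and the volume stays online while it proceeds.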