Bricks have different disk usage

Hello,


I currently have a GlusterFS 3.7.15 distributed-replicated volume with 4 bricks (2 x 2 = 4), and the bricks themselves show different disk usage. I was under the impression that they should all be roughly equal?


All bricks are 250GB disks. One set of bricks (2 bricks) is at 91% usage, while the other set (2 bricks) is at 16%.


The volume originally started out with just one set of 2 bricks (250GB total). Then 2 more bricks were added to expand the volume to 500GB. After adding the new bricks, a fix-layout was run as well as a rebalance.
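For reference, the expansion and rebalance steps were along these lines (a sketch using the standard gluster CLI, with the volume name and brick paths from the details below, not a verbatim shell history):

```shell
# Add the second replica pair to grow the distributed-replicate volume
gluster volume add-brick glustervol0 replica 2 \
    dc1-x-smb-clust-01-1:/vol2/brick2 \
    dc1-x-smb-clust-01-2:/vol2/brick2

# Recompute the hash layout so new bricks receive new files
gluster volume rebalance glustervol0 fix-layout start

# Migrate existing data according to the new layout
gluster volume rebalance glustervol0 start

# Check progress / completion
gluster volume rebalance glustervol0 status
```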


I'm really not sure what to make of this.


Details below:


Volume Name: glustervol0
Type: Distributed-Replicate
Volume ID: 243e0652-5b95-4f63-bcf6-f7c60a75ff83
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: dc1-x-smb-clust-01-1:/vol1/brick1
Brick2: dc1-x-smb-clust-01-2:/vol1/brick1
Brick3: dc1-x-smb-clust-01-1:/vol2/brick2
Brick4: dc1-x-smb-clust-01-2:/vol2/brick2

/dev/xvdb1                         250G  226G   25G  91% /vol1
/dev/xvdc1                         250G   40G  211G  16% /vol2


Cheers,


--------------------------------------------------
Kahlil Talledo

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
