On 01/12/2016 01:48 PM, Patrick Kaiser wrote:
Hi,
thanks for your feedback. I've figured out that one brick was no longer
working. Only after restarting the whole server hosting the failed brick
does the volume now show identical sizes.
Thanks for the confirmation. I thought there could be some bug in
gluster :-)
Pranith
Thanks
Kind regards,
Patrick Kaiser
VNC - Virtual Network Consult GmbH
On 01/08/2016 06:30 PM, Patrick Kaiser wrote:
hi,
I am running a distributed replicated GlusterFS setup with 4 nodes.
Currently I have no problems, but when I run "gluster volume status"
I see different free disk space on every node. I would have expected
gluster00 and gluster01 to show the same free and used space, and
likewise gluster02 and gluster03, since those are the replicated
pairs.
It doesn't look right to me either. Do you have any self-heals pending
on the first replica subvolume? Check with "gluster volume heal
<volname> info".
Pranith
root@gluster0:~# gluster volume status GV01 detail
Status of volume: GV01
------------------------------------------------------------------------------
Brick                : Brick gluster00.storage.domain:/brick/gv01
Port                 : 49163
Online               : Y
Pid                  : 3631
File System          : xfs
Device               : /dev/mapper/vg--gluster0-DATA
Mount Options        : rw,relatime,attr2,delaylog,noquota
Inode Size           : 256
Disk Space Free      : 5.7TB
Total Disk Space     : 13.6TB
Inode Count          : 2923388928
Free Inodes          : 2922850330
------------------------------------------------------------------------------
Brick                : Brick gluster01.storage.domain:/brick/gv01
Port                 : 49163
Online               : Y
Pid                  : 2976
File System          : xfs
Device               : /dev/mapper/vg--gluster1-DATA
Mount Options        : rw,relatime,attr2,delaylog,noquota
Inode Size           : 256
Disk Space Free      : 4.4TB
Total Disk Space     : 13.6TB
Inode Count          : 2923388928
Free Inodes          : 2922826116
------------------------------------------------------------------------------
Brick                : Brick gluster02.storage.domain:/brick/gv01
Port                 : 49163
Online               : Y
Pid                  : 3051
File System          : xfs
Device               : /dev/mapper/vg--gluster2-DATA
Mount Options        : rw,relatime,attr2,delaylog,noquota
Inode Size           : 256
Disk Space Free      : 6.4TB
Total Disk Space     : 13.6TB
Inode Count          : 2923388928
Free Inodes          : 2922851020
------------------------------------------------------------------------------
Brick                : Brick gluster03.storage.domain:/brick/gv01
Port                 : N/A
Online               : N
Pid                  : 29822
File System          : xfs
Device               : /dev/mapper/vg--gluster3-DATA
Mount Options        : rw,relatime,attr2,delaylog,noquota
Inode Size           : 256
Disk Space Free      : 6.2TB
Total Disk Space     : 13.6TB
Inode Count          : 2923388928
Free Inodes          : 2922847631
Friendly regards,
Patrick
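[Editor's note] The per-brick free space in a "detail" listing like the
one above can also be compared programmatically. The sketch below is
not a gluster tool: the embedded sample text, the parsing regexes, and
the assumption that consecutive bricks form a replica-2 pair are all
taken from this thread's 2x2 layout, not from gluster itself.

```python
# Sketch: parse "gluster volume status <vol> detail" text and flag
# replica pairs whose "Disk Space Free" values diverge. The replica
# pairing (bricks 0+1, 2+3) is an assumption for this 2x2 setup.
import re

# Trimmed sample of the output quoted in the thread (only the fields
# this sketch needs).
SAMPLE = """\
Brick                : Brick gluster00.storage.domain:/brick/gv01
Disk Space Free      : 5.7TB
Brick                : Brick gluster01.storage.domain:/brick/gv01
Disk Space Free      : 4.4TB
Brick                : Brick gluster02.storage.domain:/brick/gv01
Disk Space Free      : 6.4TB
Brick                : Brick gluster03.storage.domain:/brick/gv01
Disk Space Free      : 6.2TB
"""

def parse_free_space(text):
    """Return [(brick, free_tb), ...] in the order bricks appear."""
    bricks, frees = [], []
    for line in text.splitlines():
        m = re.match(r"Brick\s*:\s*Brick\s+(\S+)", line)
        if m:
            bricks.append(m.group(1))
            continue
        m = re.match(r"Disk Space Free\s*:\s*([\d.]+)TB", line)
        if m:
            frees.append(float(m.group(1)))
    return list(zip(bricks, frees))

def replica_mismatches(pairs, tolerance_tb=0.1):
    """Compare consecutive bricks as replica-2 pairs; return pairs
    whose free space differs by more than tolerance_tb."""
    bad = []
    for i in range(0, len(pairs) - 1, 2):
        (b1, f1), (b2, f2) = pairs[i], pairs[i + 1]
        if abs(f1 - f2) > tolerance_tb:
            bad.append((b1, b2, round(abs(f1 - f2), 1)))
    return bad

if __name__ == "__main__":
    for b1, b2, diff in replica_mismatches(parse_free_space(SAMPLE)):
        print(f"{b1} vs {b2}: free space differs by {diff} TB")
```

On this sample it flags both pairs (1.3 TB and 0.2 TB apart); in
practice small differences between replicas can be benign, so the
tolerance would need tuning for a real deployment.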
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users