remove-brick question

I inherited a system with a wide mix of array sizes (no replication) running
GlusterFS 3.2.2, and wanted to drain data from a failing array.

I upgraded to 3.3.2 and kicked off:
gluster volume remove-brick scratch "gfs-node01:/sda" start
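
(My understanding is that this only starts migrating data off the brick, and
that the brick isn't actually dropped from the volume until a final

gluster volume remove-brick scratch "gfs-node01:/sda" commit

I haven't run that commit yet.)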

After some time I got this:
gluster volume remove-brick scratch "gfs-node01:/sda" status
      Node  Rebalanced-files      size    scanned  failures       status
 ---------  ----------------  --------  ---------  --------  -----------
 localhost                 0    0Bytes          0         0  not started
gfs-node06                 0    0Bytes          0         0  not started
gfs-node03                 0    0Bytes          0         0  not started
gfs-node05                 0    0Bytes          0         0  not started
gfs-node01        2257394624     2.8TB    5161640    208878    completed

Two things jump instantly to mind:
1) The number of failures (208878) is rather large.
2) A _different_ disk seems to have been _partially_ drained: sda, the brick
I asked to remove, is still 100% full, while sdb looks like it was the one
partially emptied:
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda              2.8T  2.7T   12G 100% /sda
/dev/sdb              2.8T  769G  2.0T  28% /sdb
/dev/sdc              2.8T  2.1T  698G  75% /sdc
/dev/sdd              2.8T  2.2T  589G  79% /sdd



When I mount the volume it comes up read-only (another problem I want to fix
ASAP), so I'm pretty sure the failures aren't due to users changing the
filesystem underneath me.
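
(I assume the details of those 208878 failures land in the rebalance log on
gfs-node01, something like /var/log/glusterfs/scratch-rebalance.log if I have
the filename right. A pointer to the right file, or to what that failure
counter actually counts, would be much appreciated.)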

Thanks for any pointers.

James Bellinger


