Hi,

I have a GlusterFS 3.10.10 (tried 3.12.6 as well) volume on Ubuntu 16.04 with a 3-SSD tier where one SSD is bad.

Status of volume: labgreenbin
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Hot Bricks:
Brick labgfs81:/gfs/p1-tier/mount           49156     0          Y       4217
Brick labgfs51:/gfs/p1-tier/mount           N/A       N/A        N       N/A
Brick labgfs11:/gfs/p1-tier/mount           49152     0          Y       643
Cold Bricks:
Brick labgfs11:/gfs/p1/mount                49153     0          Y       312
Brick labgfs51:/gfs/p1/mount                49153     0          Y       295
Brick labgfs81:/gfs/p1/mount                49153     0          Y       307

I cannot find a command to replace the SSD, so instead I am trying to detach the tier, but:

# gluster volume tier labgreenbin detach start
volume tier detach start: failed: Pre Validation failed on labgfs51. Found stopped brick labgfs51:/gfs/p1-tier/mount. Use force option to remove the offline brick
Tier command failed

Adding 'force' only prints the usage:

# gluster volume tier labgreenbin detach start force
Usage:
volume tier <VOLNAME> status
volume tier <VOLNAME> start [force]
volume tier <VOLNAME> stop
volume tier <VOLNAME> attach [<replica COUNT>] <NEW-BRICK>... [force]
volume tier <VOLNAME> detach <start|stop|status|commit|[force]>

So I tried removing the brick instead:

# gluster v remove-brick labgreenbin replica 2 labgfs51:/gfs/p1-tier/mount force
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit force: failed: Removing brick from a Tier volume is not allowed
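For reference, the sequence I would have expected to use for a graceful detach, based on the usage output above and assuming all hot bricks were online, is roughly:

gluster volume tier labgreenbin detach start    # begin demoting data off the hot tier
gluster volume tier labgreenbin detach status   # poll until the demotion reports completed
gluster volume tier labgreenbin detach commit   # then remove the hot bricks from the volume

I never got past the first step because of the dead brick on labgfs51.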
I finally succeeded in removing the tier with:

# gluster volume tier labgreenbin detach force

but what does that mean? Will the contents of the tier be lost? How should I resolve this situation?

/Curt
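PS: In the meantime, I assume I can at least check the surviving hot bricks (labgfs81 and labgfs11) by hand for files that were never demoted to the cold tier. Something along these lines, skipping gluster's internal .glusterfs directory (the path is the tier brick mount from the status output above):

# run on each node that held a hot brick
find /gfs/p1-tier/mount -path /gfs/p1-tier/mount/.glusterfs -prune -o -type f -print

My assumption is that some of the entries listed may just be zero-byte link/pointer files rather than real data, so this is only a rough check.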