How to recover an accidentally deleted brick directory?

Hi Shwetha,


 command "gluster volume start eccp_glance force" on the other node gives following:


with cli.log


On the damaged node, "gluster volume start eccp_glance force" gives:

[screenshot attachment, scrubbed by the archive]






On 28 Nov 2013, at 4:50 PM, shwetha <spandura at redhat.com> wrote:

> 1) Create the brick directory "/opt/gluster_data/eccp_glance" on the nodes where you deleted the directories.
> 
> 2) From any of the storage nodes, execute:
> gluster volume start <volume_name> force   : restarts the brick process
> gluster volume status <volume_name>        : checks that all the brick processes are started
> gluster volume heal <volume_name> full     : triggers self-heal onto the removed bricks
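> 
> A minimal end-to-end sketch of these steps, assuming the volume name and brick path from this thread (eccp_glance on /opt/gluster_data/eccp_glance):
> 
>     # On the damaged node: recreate the missing brick directory
>     mkdir -p /opt/gluster_data/eccp_glance
> 
>     # From any storage node: force-start the volume to respawn the brick process
>     gluster volume start eccp_glance force
> 
>     # Verify that every brick now shows Online = Y
>     gluster volume status eccp_glance
> 
>     # Trigger a full self-heal to repopulate the empty brick from its replica
>     gluster volume heal eccp_glance full
> 
>     # Optionally, watch heal progress (entries still pending heal)
>     gluster volume heal eccp_glance info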
> -Shwetha
> 
> On 11/28/2013 02:09 PM, ??? wrote:
>> hi all,
>> 
>> I accidentally removed the brick directory of a volume on one node; the replica count for this volume is 2.
>> 
>> Now there is no corresponding glusterfsd process on this node, and 'gluster volume status' shows that the brick is offline, like this:
>> Gluster process                                        Port    Online  Pid
>> ---------------------------------------------------------------------------
>> Brick 192.168.64.11:/opt/gluster_data/eccp_glance      N/A     Y       2513
>> Brick 192.168.64.12:/opt/gluster_data/eccp_glance      49161   Y       2542
>> Brick 192.168.64.17:/opt/gluster_data/eccp_glance      49164   Y       2537
>> Brick 192.168.64.18:/opt/gluster_data/eccp_glance      49154   Y       4978
>> Brick 192.168.64.29:/opt/gluster_data/eccp_glance      N/A     N       N/A
>> Brick 192.168.64.30:/opt/gluster_data/eccp_glance      49154   Y       4072
>> Brick 192.168.64.25:/opt/gluster_data/eccp_glance      49155   Y       11975
>> Brick 192.168.64.26:/opt/gluster_data/eccp_glance      49155   Y       17947
>> Brick 192.168.64.13:/opt/gluster_data/eccp_glance      49154   Y       26045
>> Brick 192.168.64.14:/opt/gluster_data/eccp_glance      49154   Y       22143
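>> 
>> A quick one-liner to pick out just the offline bricks, assuming the column layout shown above (fourth field is the Online flag):
>> 
>>     gluster volume status eccp_glance | awk '$1 == "Brick" && $4 == "N"'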
>> 
>> 
>> So, is there a way to bring this brick back to normal?
>> 
>> Thanks!
>> 
>> 
>> 
>> 
>> 
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://supercolony.gluster.org/mailman/listinfo/gluster-users
> 
> 

-------------- next part --------------
An HTML copy of this message and three screenshots (Pasted Graphic.tiff, Pasted Graphic 1.tiff, Pasted Graphic 2.tiff) were scrubbed from the archive:
<http://supercolony.gluster.org/pipermail/gluster-users/attachments/20131128/4eaa2c31/attachment-0001.html>
<http://supercolony.gluster.org/pipermail/gluster-users/attachments/20131128/4eaa2c31/attachment-0003.tiff>
<http://supercolony.gluster.org/pipermail/gluster-users/attachments/20131128/4eaa2c31/attachment-0004.tiff>
<http://supercolony.gluster.org/pipermail/gluster-users/attachments/20131128/4eaa2c31/attachment-0005.tiff>

