Re: how to detach an offline peer which carries data

I am sorry for the misunderstanding.
Actually, I can stop the volume and even delete it. What I really wanted to say is that the volume must not be stopped or deleted, because some virtual machines are running on it.
In the case above, P1 has crashed and I have to reinstall the system on P1, so P1 has lost all of its information about the volume and the other peers mentioned above. When P1 comes back, I want to probe it back into the cluster that P2 and P3 belong to, and recover bricks b1 and b2. What should I do?
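
For reference, the commonly described recovery flow for a reinstalled node looks roughly like this; the volume name and brick paths are placeholders, and the exact steps differ between GlusterFS versions, so treat it as a sketch rather than a recipe:

    # On a surviving node (P2 or P3): note the UUID the cluster still has for P1
    gluster peer status

    # On the reinstalled P1: stop glusterd and restore that UUID
    systemctl stop glusterd
    # edit /var/lib/glusterd/glusterd.info so that UUID=<old-P1-uuid> (keep the other lines)
    systemctl start glusterd

    # Re-join the cluster and pull the volume definition from a healthy peer
    gluster peer probe P2
    gluster volume sync P2 all

    # Recreate the brick directories for b1 and b2, then bring the bricks up and heal
    gluster volume start <VOLNAME> force
    gluster volume heal <VOLNAME> full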

On Sat, Apr 30, 2016 at 11:04 PM, Atin Mukherjee <atin.mukherjee83@xxxxxxxxx> wrote:

-Atin
Sent from one plus one
On 30-Apr-2016 8:20 PM, "袁仲" <yzlyourself@xxxxxxxxx> wrote:
>
> I have a scenario like this:
>
>
> I have 3 peers, e.g. P1, P2 and P3, and each of them has 2 bricks:
>
>        P1 has 2 bricks, b1 and b2. 
>
>        P2 has 2 bricks, b3 and b4. 
>
>        P3 has 2 bricks, b5 and b6.
>
> Based on the above, I create a volume (AFR volume) like this: 
>
> b1 and b3 make up a replicate subvolume   rep-sub1
>
> b4 and b5  make up a replicate subvolume  rep-sub2
>
> b2 and b6  make up a replicate subvolume  rep-sub3
>
> And rep-sub1, rep-sub2 and rep-sub3 make up a distribute volume, and then I start the volume.
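
For reference, a layout like this is normally produced by a replica-2 create command in which each consecutive pair of bricks becomes one replica set; the volume name and brick paths below are only placeholders:

    gluster volume create <VOLNAME> replica 2 \
        P1:/data/b1 P2:/data/b3 \
        P2:/data/b4 P3:/data/b5 \
        P1:/data/b2 P3:/data/b6
    gluster volume start <VOLNAME>

Here (b1, b3) form rep-sub1, (b4, b5) form rep-sub2, and (b2, b6) form rep-sub3.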
>
>
> Now, P1 has crashed or has just disconnected. I want to detach P1, but the volume has been started and absolutely can't be stopped or deleted. So I did this:  gluster peer detach host-P1.

This is destructive. Detaching a peer that hosts bricks definitely needs to be blocked; otherwise you technically lose the volume, since Gluster is a distributed file system. Have you tried to analyze why the node crashed? And is there any specific reason why you want to stop the volume? Replication gives you high availability, so your volume would still be accessible. Even if you do want to stop the volume, try the following:

1. Restart glusterd; if it still fails, go to step 2.
2. Go for the peer replacement procedure.

Otherwise, you may try out 'volume stop force'; that may work too (a rough sketch of both commands is below).
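
A minimal sketch, assuming a systemd-based distribution; <VOLNAME> is a placeholder for your volume name:

    # 1. Restart the management daemon on the affected node
    systemctl restart glusterd        # or: service glusterd restart

    # Last resort: force-stop the volume (clients will lose access to it)
    gluster volume stop <VOLNAME> force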

>
> But it does not work; the reason is that P1 has bricks on it, according to the GlusterFS error message printed on the shell. 
>
>
> So I commented out the code that produced the error above and tried again. It really works. It's amazing. And the VMs running on the volume are all fine. 
>
> BUT this leads to a big problem: the glusterd restart fails, on both P2 and P3. When I remove the contents under /var/lib/glusterfs/vols/, it restarts successfully, so I suspect it is something about the volume configuration.
>
>
> My questions are: 
>
> Is there a method to detach P1 in the scenario above?
>
> Or, what issues will I run into if I make it work by modifying the source code? 
>
>
> Thanks so much.
>
>


_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users
