Re: building bricks in AWS off an EBS snapshot

Yes, this is what I do, since otherwise I get the famous "already part of a volume" error.
After extensive googling, I found that the following needs to be done to clean up any past Gluster traces from a brick:

setfattr -x trusted.glusterfs.volume-id <brick-path>
setfattr -x trusted.gfid <brick-path>
rm -r <brick-path>/.glusterfs
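
For anyone scripting this, the three steps above could be wrapped in something like the sketch below. `clean_brick` is a made-up helper name, not anything shipped with GlusterFS, and it assumes the brick has already been removed from any running volume:

```shell
# Hypothetical helper wrapping the three cleanup steps above.
# Assumes the brick is no longer part of any running volume.
clean_brick() {
    brick="$1"
    [ -d "$brick" ] || { echo "no such directory: $brick" >&2; return 1; }
    # Drop the Gluster xattrs; ignore failures when an attribute is
    # already absent (setfattr exits non-zero in that case).
    setfattr -x trusted.glusterfs.volume-id "$brick" 2>/dev/null || true
    setfattr -x trusted.gfid "$brick" 2>/dev/null || true
    # Remove the internal metadata tree.
    rm -rf "$brick/.glusterfs"
}
```

You would then call it once per brick, e.g. `clean_brick /bricks/brick1` (path illustrative), before handing the directory to gluster volume create.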

Feel free to tell me otherwise; I will take all the help I can get ;-)


-----Original Message-----
From: Vijay Bellur [mailto:vbellur@xxxxxxxxxx] 
Sent: Friday, October 23, 2015 2:05 PM
To: Mayzel, Eugene; gluster-users@xxxxxxxxxxx
Subject: Re: building bricks in AWS off an EBS snapshot

On Tuesday 20 October 2015 01:41 AM, Mayzel, Eugene wrote:
> Hello,
>
> If anyone could help me with a strange issue I am experiencing:
>
> I run an EC2 stack, say v1, that has a gluster with three one brick 
> nodes in the replication mode.
>
> I make an ebs volume level snapshot of one of the bricks (not the 
> gluster level snapshot)
>
> Now I want to spin a new EC2 stack, v2, using that single snapshot 
> from
> v1 to build all three bricks in v2.
>
> Ebs volumes are successfully built, and can see that all data is 
> there, on all three bricks, I clean the bricks using the setfattr and 
> rem ./glusterfs routine,
>

Are you referring to <brick-path>/.glusterfs here? If yes, any reason why that is being cleaned up?

Regards,
Vijay
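
To make it easier to see what state a restored brick is actually in before (or after) cleaning it, a check along these lines could help; the function name is made up for illustration:

```shell
# Hypothetical check: does this directory still look like a used brick?
# Succeeds (exit 0) only when neither the trusted.glusterfs.volume-id
# xattr nor the .glusterfs metadata directory remains.
brick_is_clean() {
    brick="$1"
    # getfattr (from the attr package) prints the attribute value if
    # present; empty output means the attribute is gone.
    id=$(getfattr --only-values -n trusted.glusterfs.volume-id \
         "$brick" 2>/dev/null || true)
    [ -z "$id" ] && [ ! -d "$brick/.glusterfs" ]
}
```

On a brick built from the v1 snapshot this should fail until the setfattr/rm steps have run; note that reading trusted.* xattrs generally requires root.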





_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users


