Re: Fwd: Gluster Volume Replication using 2 AWS instances on Autoscaling


 



volume heal <vol-name> full

I meant (notice the full at the end)
(and very much sorry for the silly footer in my last post! :-/)

Sorry Vijay, I don't understand what you mean.
I recently had the wrong filesystem under one brick
(I had "forgotten" to mount the right disk before creating the gluster volume).
I did the steps: remove brick, change brick, add brick, heal full,
and the new brick got populated quickly.
All fine…
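
In case it helps, a minimal sketch of that sequence (the volume and brick names
below are made up for illustration, using the gluster CLI from a shell), ending
with the corrected full heal:

gluster volume remove-brick myvol replica 1 node2:/bricks/old force
# mount the correct disk at the new brick path, then re-add it
gluster volume add-brick myvol replica 2 node2:/bricks/new force
gluster volume heal myvol full
# watch the heal progress
gluster volume heal myvol info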

Best regards
Bernhard

On Mar 13, 2014, at 6:11 PM, Vijay Bellur <vbellur@xxxxxxxxxx> wrote:
> 
> Begin forwarded message:
> 
> From: bernhard glomm <bernhard.glomm@xxxxxxxxxxx>
> Subject: Re: Gluster Volume Replication using 2 AWS instances on Autoscaling
> Date: March 13, 2014 6:08:10 PM GMT+01:00
> To: Vijay Bellur <vbellur@xxxxxxxxxx>
> 
> ??? I thought replace-brick was not recommended at the moment.
> On a replica 2 volume with 3.4.2 I successfully use:

replace-brick is not recommended for data migration. commit force only performs a volume topology update and does not migrate any data; self-heal then synchronizes the data to the new brick.
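
A hedged sketch of what that looks like in practice (the volume name is a placeholder): after the commit force you can trigger a full self-heal and check the entries still pending with

gluster volume heal <vol-name> full
gluster volume heal <vol-name> info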

-Vijay

> 
> volume remove-brick <vol-name> replica 1 <brick-name> force
> # replace the old brick with the new one, mount another disk or whatever, then
> volume add-brick <vol-name> replica 2 <brick-name> force
> volume heal <vol-name>
> 
> hth
> 
> Bernhard
> 
> 
> On Mar 13, 2014, at 5:48 PM, Vijay Bellur <vbellur@xxxxxxxxxx> wrote:
> 
> On 03/13/2014 09:18 AM, Alejandro Planas wrote:
>> Hello,
>> 
>> We have 2 AWS instances, 1 brick on each instance, and one volume
>> replicated across both instances. When one of the instances fails
>> completely and autoscaling replaces it with a new one, we have trouble
>> recreating the replicated volume.
>> 
>> Can anyone shed some light on the gluster commands required to
>> include the new replacement instance (with one brick) as a member of
>> the replicated volume?
>> 
> 
> You can probably use:
> 
> volume replace-brick <volname> <old-brick> <new-brick> commit force
> 
> This will remove the old brick from the volume and bring the new brick into
> the volume. Self-healing can then synchronize data to the new brick.
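> 
> As a rough sketch (host names and brick paths below are placeholders): if the
> replacement instance is not yet part of the trusted pool, probe it first, then
> swap the brick and let self-heal repopulate it, e.g.
> 
> gluster peer probe new-node
> gluster volume replace-brick <volname> old-node:/export/brick new-node:/export/brick commit force
> gluster volume heal <volname> full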
> 
> Regards,
> Vijay
> 



_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users
