On 06/25/2015 03:07 AM, John Gardeniers wrote:
> No takers on this one?
>
> On 22/06/15 14:37, John Gardeniers wrote:
>> Until last weekend we had a simple 1x2 replicated volume, consisting
>> of a single brick on each peer. After a drive failure screwed the
>> brick on one peer we decided to create a new peer and swap the bricks,
>> running "gluster volume replace-brick gluster-rhev
>> dead_peer:/gluster_brick_1 new_peer:/gluster_brick_1 commit force".

Did the replace-brick succeed? Note that running "replace-brick commit
force" can result in data loss unless you explicitly take care of the
data yourself beforehand.

>> After trying for some time and not wishing to rely on a single peer,
>> we added kari as an additional replica with "gluster volume add-brick
>> gluster-rhev replica 3 new_peer:/gluster_brick_1 force".
>>
>> Can we now *safely* remove the dead brick and revert back to replica 2?

If the earlier replace-brick didn't go through, then you can run
remove-brick start, followed by remove-brick commit once the status
shows completed. But double-check the data as well.

>> regards,
>> John

--
~Atin
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users
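For clarity, the start/status/commit sequence suggested above might look like the sketch below. The volume and brick names (gluster-rhev, dead_peer:/gluster_brick_1) are taken from the thread, and the exact syntax for reducing the replica count varies between GlusterFS versions (some only accept "force" when a replica-count change is involved), so verify against the documentation for your installed version before running anything:

```shell
# Start draining the brick to be removed, dropping the volume to replica 2.
gluster volume remove-brick gluster-rhev replica 2 \
    dead_peer:/gluster_brick_1 start

# Poll until the status column reports "completed".
gluster volume remove-brick gluster-rhev replica 2 \
    dead_peer:/gluster_brick_1 status

# Only then commit the removal.
gluster volume remove-brick gluster-rhev replica 2 \
    dead_peer:/gluster_brick_1 commit
```

Since the dead brick is unreachable, the drain step may have nothing to migrate; the important part is confirming the surviving replicas hold the data (e.g. via "gluster volume heal gluster-rhev info") before committing.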