On Thu, Apr 11, 2019 at 10:23 AM Karthik Subrahmanya <ksubrahm@xxxxxxxxxx> wrote:
Hi Strahil,

Can you give us some more insights on
- the volume configuration you were using? (the query commands below would capture most of this)
- why you wanted to replace your brick?
- which brick(s) you tried replacing?
- if you remember the commands/steps that you followed, please give that as well.
- what problem(s) did you face?
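For the volume configuration, the output of the standard query commands would be the quickest way to share it (the volume name is a placeholder):

    # show volume layout, options and the brick list
    gluster volume info <VOLNAME>
    # show which bricks and processes are currently online
    gluster volume status <VOLNAME>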
Regards,
Karthik

On Thu, Apr 11, 2019 at 10:14 AM Strahil <hunter86_bg@xxxxxxxxx> wrote:

Hi Karthik,
I have used the replace-brick function only once, when I wanted to change my arbiter (v3.12.15 in oVirt 4.2.7), and it was a complete disaster.
Most probably I should have stopped the source arbiter before doing that, but the docs didn't mention it. Since then I have always used reset-brick, as it has never let me down.
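For reference, the reset-brick sequence looks roughly like this on a release that has it (>= 3.9.0); the volume name, host and brick path are placeholders:

    # take the failing brick offline
    gluster volume reset-brick <VOLNAME> <HOST>:<BRICKPATH> start
    # ...replace the disk / recreate the empty brick directory...
    # bring the brick back under the same name and let self-heal sync it
    gluster volume reset-brick <VOLNAME> <HOST>:<BRICKPATH> <HOST>:<BRICKPATH> commit force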
Best Regards,
Strahil Nikolov

On Apr 11, 2019 07:34, Karthik Subrahmanya <ksubrahm@xxxxxxxxxx> wrote:

Hi Strahil,

Thank you for sharing your experience with the reset-brick option.
Since he is using gluster version 3.7.6, the reset-brick [1] option is not available there; it was introduced in 3.9.0. He has to go with replace-brick with the force option if he wants to use the same path and name for the new brick.
Yes, it is recommended that the new brick be of the same size as the other bricks.

Regards,
Karthik
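For completeness, on 3.7.x that invocation would look roughly like this, reusing the same path and name (the volume name is a placeholder; the brick path is taken from Martin's volume info below). This is a sketch, assuming the RAID has been rebuilt and an empty brick directory recreated on node2 first:

    # replace the dead brick with a fresh one of the same name
    gluster volume replace-brick <VOLNAME> \
        node2.san:/tank/gluster/gv0imagestore/brick1 \
        node2.san:/tank/gluster/gv0imagestore/brick1 commit force
    # then watch self-heal repopulate the new brick
    gluster volume heal <VOLNAME> info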
On Wed, Apr 10, 2019 at 10:31 PM Strahil <hunter86_bg@xxxxxxxxx> wrote:

I have used reset-brick, but only to change the brick layout.
You may give it a try, but I guess the new brick needs to have the same amount of space (or more).
Maybe someone more experienced should share a more sound solution.
Best Regards,
Strahil Nikolov

On Apr 10, 2019 12:42, Martin Toth <snowmailer@xxxxxxxxx> wrote:
>
> Hi all,
>
> I am running a replica 3 gluster volume with 3 bricks. One of my servers failed - all disks are showing errors and the RAID is in a fault state.
>
> Type: Replicate
> Volume ID: 41d5c283-3a74-4af8-a55d-924447bfa59a
> Status: Started
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: node1.san:/tank/gluster/gv0imagestore/brick1
> Brick2: node2.san:/tank/gluster/gv0imagestore/brick1 <— this brick is down
> Brick3: node3.san:/tank/gluster/gv0imagestore/brick1
>
> So one of my bricks (node2) has failed completely. It went down and all data on it is lost (failed RAID on node2). Now I am running only two bricks, on 2 of the 3 servers.
> This is a really critical problem for us; we could lose all data. I want to add new disks to node2, create a new RAID array on them and try to replace the failed brick on this node.
>
> What is the procedure for replacing Brick2 on node2? Can someone advise? I can't find anything relevant in the documentation.
>
> Thanks in advance,
> Martin
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-devel