Re: [External] Replica 3 - how to replace failed node (peer)

Hi Martin,

The reset-brick command was introduced in 3.9.0 and is not present in 3.7.6. You can instead use the replace-brick command with the force option, even if you want to keep the same name for the brick being replaced.
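As a rough sketch only (assuming the volume is named gv0imagestore, which is not shown in the output you pasted, and that the new empty brick directory already exists and glusterd is running again on node2), that would look something like:

    # replace the dead brick with a fresh one at the same path; self-heal
    # then copies the data back from the two surviving replicas
    gluster volume replace-brick gv0imagestore \
        node2.san:/tank/gluster/gv0imagestore/brick1 \
        node2.san:/tank/gluster/gv0imagestore/brick1 \
        commit force

    # list the entries still pending heal afterwards
    gluster volume heal gv0imagestore info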
3.7.6 reached end of life long ago; glusterfs-6 is the latest version, with many improvements, bug fixes, and new features. The release schedule can be found at [1]. Upgrading to one of the maintained branches is highly recommended.

On Wed, Apr 10, 2019 at 4:14 PM Martin Toth <snowmailer@xxxxxxxxx> wrote:
I’ve read this documentation, but step 4 is really unclear to me. I don’t understand the related mkdir/rmdir/setfattr commands and so on.

Step 4:
Using the gluster volume fuse mount (In this example: /mnt/r2) set up metadata so that data will be synced to new brick (In this case it is from Server1:/home/gfs/r2_1 to Server1:/home/gfs/r2_5)

Why should I change trusted.non-existent-key on this volume? 
It is even more confusing because the other howtos mentioned do not contain this step at all.
Those steps were needed in older releases to set some metadata on the good bricks so that heal would not happen from the replaced brick towards the good bricks, which could lead to data loss. Since you are on 3.7.6, all of these steps are automated in that branch. You just need to run the replace-brick command, which will take care of all of those things.
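For reference only, the "step 4" metadata dance in those older howtos was roughly the following, run against the fuse mount (/mnt/r2 in the documentation's example); the directory name below is just a placeholder, and none of this is needed on your release:

    # create and remove a dummy entry and a dummy xattr on the mount so that
    # the good bricks get marked as the heal source for the replaced brick
    mkdir /mnt/r2/nonexistent-dir
    rmdir /mnt/r2/nonexistent-dir
    setfattr -n trusted.non-existent-key -v abc /mnt/r2
    setfattr -x trusted.non-existent-key /mnt/r2

On 3.7.6 the replace-brick command with commit force performs the equivalent marking for you.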

[1] https://www.gluster.org/release-schedule/

Regards,
Karthik

BR, 
Martin

On 10 Apr 2019, at 11:54, Davide Obbi <davide.obbi@xxxxxxxxxxx> wrote:


On Wed, Apr 10, 2019 at 11:42 AM Martin Toth <snowmailer@xxxxxxxxx> wrote:
Hi all,

I am running a replica 3 gluster volume with 3 bricks. One of my servers failed: all disks are showing errors and the RAID array is in a fault state.

Type: Replicate
Volume ID: 41d5c283-3a74-4af8-a55d-924447bfa59a
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: node1.san:/tank/gluster/gv0imagestore/brick1
Brick2: node2.san:/tank/gluster/gv0imagestore/brick1 <— this brick is down
Brick3: node3.san:/tank/gluster/gv0imagestore/brick1

So one of my bricks has failed completely (node2). It went down and all of its data is lost (failed RAID on node2). I am now running on only two bricks on 2 servers out of 3.
This is a really critical problem for us, as we could lose all data. I want to add new disks to node2, create a new RAID array on them, and try to replace the failed brick on this node.

What is the procedure for replacing Brick2 on node2? Can someone advise? I can’t find anything relevant in the documentation.

Thanks in advance,
Martin
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users


--
Davide Obbi
Senior System Administrator

Booking.com B.V.
Vijzelstraat 66-80 Amsterdam 1017HL Netherlands
Direct +31207031558
Booking.com
Empowering people to experience the world since 1996
43 languages, 214+ offices worldwide, 141,000+ global destinations, 29 million reported listings 
Subsidiary of Booking Holdings Inc. (NASDAQ: BKNG)

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
