Hi Abhishek,
How are you connecting the two boards, and how exactly are you removing the 2nd board manually? I need to know this because if you are removing your 2nd board from the cluster abruptly (an abrupt shutdown), then you should not be able to perform the remove-brick operation for the 2nd node from the first node, yet in your case it is succeeding. Could you check your network connection once again while removing your node and bringing it back?
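For example, a quick check like the following (just a sketch, using your 2nd board's address 10.32.1.144 from your logs), run on the first node right before the remove-brick, would tell you whether the peer is actually reachable at that moment:

    # is the 2nd board reachable on the network at all?
    ping -c 3 10.32.1.144

    # does glusterd on the first node still see the 2nd board as connected?
    gluster peer status

If the peer shows as Connected, that would explain why the remove-brick succeeded; if it shows Disconnected, remove-brick fails with the "Incorrect brick" error you reported.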
Thanks,
Gaurav
From: "ABHISHEK PALIWAL" <abhishpaliwal@xxxxxxxxx>
To: "Gaurav Garg" <ggarg@xxxxxxxxxx>
Cc: gluster-users@xxxxxxxxxxx
Sent: Friday, February 19, 2016 3:36:21 PM
Subject: Re: Issue in Adding/Removing the gluster node
Hi Gaurav,

Thanks for the reply.

1. Here I removed the board manually, and this time it worked fine:

[2016-02-18 10:03:40.601472] : volume remove-brick c_glusterfs replica 1 10.32.1.144:/opt/lvmdir/c2/brick force : SUCCESS
[2016-02-18 10:03:40.885973] : peer detach 10.32.1.144 : SUCCESS

Yes, this time the board was reachable, but how? I don't know, because the board was detached.

2. Here I attached the board, and this time add-brick worked fine:

[2016-02-18 10:03:42.065038] : peer probe 10.32.1.144 : SUCCESS
[2016-02-18 10:03:44.563546] : volume add-brick c_glusterfs replica 2 10.32.1.144:/opt/lvmdir/c2/brick force : SUCCESS

3. Here I removed the board again, and this time it failed:

[2016-02-18 10:37:02.816089] : volume remove-brick c_glusterfs replica 1 10.32.1.144:/opt/lvmdir/c2/brick force : FAILED : Incorrect brick 10.32.1.144:/opt/lvmdir/c2/brick for volume c_glusterfs

but here the board was not reachable.

Why is there this inconsistency when I do the same steps multiple times? Hope you are getting my point.

Regards,
Abhishek
--
Regards
Abhishek Paliwal
To: "Gaurav Garg" <ggarg@xxxxxxxxxx>
Cc: gluster-users@xxxxxxxxxxx
Sent: Friday, February 19, 2016 3:36:21 PM
Subject: Re: Issue in Adding/Removing the gluster node
Hi Gaurav,
Thanks for reply
1. Here, I removed the board manually here but this time it works fine
[2016-02-18 10:03:40.885973] : peer detach 10.32.1.144 : SUCCESS
Yes this time board is reachable but how? don't know because board is detached.
[2016-02-18 10:03:44.563546] : volume add-brick c_glusterfs replica 2 10.32.1.144:/opt/lvmdir/c2/brick force : SUCCESS
/lvmdir/c2/brick for volume c_glusterfs
Hope you are getting my point.
On Fri, Feb 19, 2016 at 3:25 PM, Gaurav Garg <ggarg@xxxxxxxxxx> wrote:
Abhishek,
When it sometimes works fine, that means the 2nd board's network connection is reachable from the first node at that moment. You can confirm this by executing the same # gluster peer status command.
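For reference, when the peer is reachable the output should look roughly like this (the UUID below is the one from your glusterd log):

    # gluster peer status
    Number of Peers: 1

    Hostname: 10.32.1.144
    Uuid: 6adf57dc-c619-4e56-ae40-90e6aef75fe9
    State: Peer in Cluster (Connected)

If it reports (Disconnected) instead, that is exactly the situation in which remove-brick fails with "Incorrect brick".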
Thanks,
Gaurav
----- Original Message -----
From: "ABHISHEK PALIWAL" <abhishpaliwal@xxxxxxxxx>
To: "Gaurav Garg" <ggarg@xxxxxxxxxx>
Cc: gluster-users@xxxxxxxxxxx
Sent: Friday, February 19, 2016 3:12:22 PM
Subject: Re: Issue in Adding/Removing the gluster node
Hi Gaurav,
Yes, you are right. I am actually forcefully detaching the node from the
slave, and when we remove the board it gets disconnected from the other board.
But my question is: I am doing this process multiple times, and sometimes it
works fine but sometimes it gives these errors.

You can see the following logs from the cmd_history.log file:
[2016-02-18 10:03:34.497996] : volume set c_glusterfs nfs.disable on : SUCCESS
[2016-02-18 10:03:34.915036] : volume start c_glusterfs force : SUCCESS
[2016-02-18 10:03:40.250326] : volume status : SUCCESS
[2016-02-18 10:03:40.273275] : volume status : SUCCESS
[2016-02-18 10:03:40.601472] : volume remove-brick c_glusterfs replica 1 10.32.1.144:/opt/lvmdir/c2/brick force : SUCCESS
[2016-02-18 10:03:40.885973] : peer detach 10.32.1.144 : SUCCESS
[2016-02-18 10:03:42.065038] : peer probe 10.32.1.144 : SUCCESS
[2016-02-18 10:03:44.563546] : volume add-brick c_glusterfs replica 2 10.32.1.144:/opt/lvmdir/c2/brick force : SUCCESS
[2016-02-18 10:30:53.297415] : volume status : SUCCESS
[2016-02-18 10:30:53.313096] : volume status : SUCCESS
[2016-02-18 10:37:02.748714] : volume status : SUCCESS
[2016-02-18 10:37:02.762091] : volume status : SUCCESS
[2016-02-18 10:37:02.816089] : volume remove-brick c_glusterfs replica 1 10.32.1.144:/opt/lvmdir/c2/brick force : FAILED : Incorrect brick 10.32.1.144:/opt/lvmdir/c2/brick for volume c_glusterfs
On Fri, Feb 19, 2016 at 3:05 PM, Gaurav Garg <ggarg@xxxxxxxxxx> wrote:
> Hi Abhishek,
>
> It seems your peer 10.32.1.144 disconnected while you were doing the
> remove-brick. See the below logs from glusterd:
>
> [2016-02-18 10:37:02.816009] E [MSGID: 106256] [glusterd-brick-ops.c:1047:__glusterd_handle_remove_brick] 0-management: Incorrect brick 10.32.1.144:/opt/lvmdir/c2/brick for volume c_glusterfs [Invalid argument]
> [2016-02-18 10:37:02.816061] E [MSGID: 106265] [glusterd-brick-ops.c:1088:__glusterd_handle_remove_brick] 0-management: Incorrect brick 10.32.1.144:/opt/lvmdir/c2/brick for volume c_glusterfs
> The message "I [MSGID: 106004] [glusterd-handler.c:5065:__glusterd_peer_rpc_notify] 0-management: Peer <10.32.1.144> (<6adf57dc-c619-4e56-ae40-90e6aef75fe9>), in state <Peer in Cluster>, has disconnected from glusterd." repeated 25 times between [2016-02-18 10:35:43.131945] and [2016-02-18 10:36:58.160458]
>
>
>
> If you are facing the same issue now, could you paste the output of your
> # gluster peer status command here?
>
> Thanks,
> ~Gaurav
>
> ----- Original Message -----
> From: "ABHISHEK PALIWAL" <abhishpaliwal@xxxxxxxxx>
> To: gluster-users@xxxxxxxxxxx
> Sent: Friday, February 19, 2016 2:46:35 PM
> Subject: Issue in Adding/Removing the gluster node
>
> Hi,
>
>
> I am working on a setup of two boards connected to each other. Gluster
> version 3.7.6 is running, and I added two bricks in replica 2 mode, but
> when I manually removed (detached) one board from the setup I got the
> following error:
>
> volume remove-brick c_glusterfs replica 1 10.32.1.144:/opt/lvmdir/c2/brick force : FAILED : Incorrect brick 10.32.1.144:/opt/lvmdir/c2/brick for volume c_glusterfs
>
> Please find the log files attached.
>
>
> Regards,
> Abhishek
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users@xxxxxxxxxxx
> http://www.gluster.org/mailman/listinfo/gluster-users
>
--
Regards
Abhishek Paliwal