Re: Glusterfs 4.1.6

Hi Ashish & all others,

If I may jump in... I have a little question, if that's ok:
are replace-brick and reset-brick different commands for two distinct
problems? I once had a faulty disk (= brick); it got replaced
(hot-swap) and received the same identifier (/dev/sdd again). I
followed this guide:

https://docs.gluster.org/en/v3/Administrator%20Guide/Managing%20Volumes/
-->> "Replacing bricks in Replicate/Distributed Replicate volumes"

If I understand it correctly:

- replace-brick is for "I have an additional disk and want to
move the data from an existing brick to a new brick"; the old brick is
removed from the volume and the new brick is added to the volume.
- reset-brick is for "one of my HDDs crashed and has been replaced
by a new one"; the brick name stays the same (see the command sketch below).
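
Just so I'm sure I have the two use cases straight, roughly like this
(the volume name "myvol" and the brick paths are only placeholders, not
taken from this thread):

  # replace-brick: move data from an existing brick to a different, new brick
  gluster volume replace-brick myvol node1:/bricks/old node1:/bricks/new commit force

  # reset-brick: the disk behind the brick was swapped, the brick path stays the same
  gluster volume reset-brick myvol node1:/bricks/brick1 start
  #   ... swap the disk, recreate the filesystem, mount it at the same path ...
  gluster volume reset-brick myvol node1:/bricks/brick1 node1:/bricks/brick1 commit force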

Did I get that right? If so: holy smokes... then I misunderstood this
completely (sorry @Pranith & Xavi). The wording is a bit strange here...

Thx
Hubert

On Thu, 3 Jan 2019 at 12:38, Ashish Pandey <aspandey@xxxxxxxxxx> wrote:
>
> Hi,
>
> Some of the steps provided by you are not correct.
> You should have used the reset-brick command, which was introduced for exactly the task you wanted to do.
>
> https://docs.gluster.org/en/v3/release-notes/3.9.0/
>
> Your thinking was correct, but replacing a faulty disk requires some additional tasks which this command
> will do automatically.
>
> Step 1 :- kill the pid of the faulty brick on the node  >>>>>> This should be done with the "reset-brick start" command. Follow the steps provided in the link.
> Step 2 :- running volume status, shows "N/A" under 'pid' & 'TCP port'
> Step 3 :- replace disk and mount new disk in same mount point where the old disk was mounted
> Step 4 :- run the command "gluster v start volname force" >>>>>>>>>>>> This should be done with the "reset-brick commit force" command. This will trigger the heal. Follow the link.
> Step 5 :- running volume status still shows "N/A" under 'pid' & 'TCP port'
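>
> In other words, the flow would look roughly like this (the volume name
> "testvol" and the brick path are only placeholders):
>
>   gluster volume reset-brick testvol node2:/bricks/brick2 start
>   # replace the failed disk, create the filesystem and mount it at the
>   # same mount point as before
>   gluster volume reset-brick testvol node2:/bricks/brick2 node2:/bricks/brick2 commit force
>   gluster volume status testvol        # brick pid/port should be back
>   gluster volume heal testvol info     # heal should be in progress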
>
> ---
> Ashish
>
> ________________________________
> From: "Amudhan P" <amudhan83@xxxxxxxxx>
> To: "Gluster Users" <gluster-users@xxxxxxxxxxx>
> Sent: Thursday, January 3, 2019 4:25:58 PM
> Subject:  Glusterfs 4.1.6
>
> Hi,
>
> I am working with Glusterfs 4.1.6 on a test machine. I am trying to replace a faulty disk; below are the steps I did (sketched as commands after the list), but I wasn't successful.
>
> 3 Nodes, 2 disks per node, Disperse Volume 4+2 :-
> Step 1 :- kill pid of the faulty brick in node
> Step 2 :- running volume status, shows "N/A" under 'pid' & 'TCP port'
> Step 3 :- replace disk and mount new disk in same mount point where the old disk was mounted
> Step 4 :- run command "gluster v start volname force"
> Step 5 :- running volume status still shows "N/A" under 'pid' & 'TCP port'
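>
> Roughly, what I ran corresponds to something like this (the volume name
> and brick path are placeholders, not the real ones):
>
>   kill <brick-pid>                                      # step 1
>   gluster volume status testvol                         # step 2: pid/port show N/A
>   mkfs.xfs /dev/sdd && mount /dev/sdd /bricks/brick2    # step 3
>   gluster volume start testvol force                    # step 4
>   gluster volume status testvol                         # step 5: pid/port still N/A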
>
> The expected behavior was that a new brick process and the heal would have started.
>
> Following the above steps on 3.10.1 works perfectly: a new brick process starts and the heal begins.
> But the same steps do not work on 4.1.6. Did I miss any steps? What should be done?
>
> Amudhan
>
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users


