Re: Glusterfs 4.1.6

Hi,

Thanks, Ashish, for the clarification. Just another question... in case
of an HDD failure (let's say sdd) with identical brick paths
(mount: /gluster/bricksdd1), the commands should look like this:

gluster volume reset-brick $volname $host:/gluster/bricksdd1 start
>> change hdd, create partition & filesystem, mount <<
gluster volume reset-brick $volname $host:/gluster/bricksdd1 \
    $host:/gluster/bricksdd1 commit force
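
For the middle step I would do something like this (just a sketch; the
partitioning scheme and the xfs filesystem are assumptions based on my
setup, adjust as needed):

parted -s /dev/sdd mklabel gpt
parted -s /dev/sdd mkpart primary xfs 0% 100%
mkfs.xfs -f /dev/sdd1
mount /dev/sdd1 /gluster/bricksdd1
# and update /etc/fstab for the new partition/UUID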

Is it possible to change the mountpoint/brick name with this command?
In my case:
old: /gluster/bricksdd1_new
new: /gluster/bricksdd1
i.e. only the mount point is different.

gluster volume reset-brick $volname $host:/gluster/bricksdd1_new
$host:/gluster/bricksdd1 commit force

I would try the following (see the sketch below):
- gluster volume reset-brick $volname $host:/gluster/bricksdd1_new start
- reformat sdd etc.
- gluster volume reset-brick $volname $host:/gluster/bricksdd1_new
  $host:/gluster/bricksdd1 commit force
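
Spelled out as commands, this is what I would try (a sketch only; I
don't know yet whether reset-brick accepts a different target path, and
the device/filesystem details are assumptions as above):

gluster volume reset-brick $volname $host:/gluster/bricksdd1_new start
umount /gluster/bricksdd1_new
# swap the disk, then recreate partition + filesystem as sketched above
mkfs.xfs -f /dev/sdd1
mount /dev/sdd1 /gluster/bricksdd1
gluster volume reset-brick $volname $host:/gluster/bricksdd1_new \
    $host:/gluster/bricksdd1 commit force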


Thanks
Hubert

On Mon, 7 Jan 2019 at 08:21, Ashish Pandey <aspandey@xxxxxxxxxx> wrote:
>
> comments inline
>
> ________________________________
> From: "Hu Bert" <revirii@xxxxxxxxxxxxxx>
> To: "Ashish Pandey" <aspandey@xxxxxxxxxx>
> Cc: "Gluster Users" <gluster-users@xxxxxxxxxxx>
> Sent: Monday, January 7, 2019 12:41:29 PM
> Subject: Re:  Glusterfs 4.1.6
>
> Hi Ashish & all others,
>
> If I may jump in... I have a small question, if that's OK.
> Are replace-brick and reset-brick different commands for two distinct
> problems? I once had a faulty disk (= brick); it got replaced
> (hot-swap) and received the same identifier (/dev/sdd again). I
> followed this guide:
>
> https://docs.gluster.org/en/v3/Administrator%20Guide/Managing%20Volumes/
> -->> "Replacing bricks in Replicate/Distributed Replicate volumes"
>
> If I understand it correctly:
>
> - replace-brick is for "I have an additional disk and want to move
> data from an existing brick to a new brick": the old brick gets removed
> from the volume and the new brick gets added to the volume.
> - reset-brick is for "one of my HDDs crashed and will be replaced by a
> new one": the brick name stays the same (see the command sketch below).
>
> Did I get that right? If so: holy smokes... then I misunderstood this
> completely (sorry @Pranith & Xavi). The wording is a bit strange here...
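>
> To illustrate what I mean (my own sketch, not taken from the docs;
> $volname, $host and the brick paths are placeholders):
>
> # replace-brick: data moves to a *different* brick, the old one is removed
> gluster volume replace-brick $volname $host:/gluster/old_brick \
>     $host:/gluster/new_brick commit force
>
> # reset-brick: same brick path, only the disk behind it was swapped
> gluster volume reset-brick $volname $host:/gluster/brick start
> # ... replace the disk, recreate the filesystem, remount ...
> gluster volume reset-brick $volname $host:/gluster/brick \
>     $host:/gluster/brick commit force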
>
> >>>>>>>>>>>>>>>>>>>>>>>>>
> Yes, your understanding is correct. In addition to the above, there is one more use of reset-brick:
> if you want to change the hostname of your server and the bricks are defined by hostname, you can use reset-brick to switch the bricks from hostname to IP address
> and then change the hostname of the server.
> In short, whenever you want to change something on one of the bricks while the location and mount point stay the same, you should use reset-brick.
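>
> For example, something like this (sketch only; "server1", "10.0.0.1"
> and the brick path are made-up placeholders):
>
> gluster volume reset-brick $volname server1:/gluster/brick1 start
> gluster volume reset-brick $volname server1:/gluster/brick1 \
>     10.0.0.1:/gluster/brick1 commit force
> # afterwards the server's hostname can be changed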
> >>>>>>>>>>>>>>>>>>>>>>>>>
>
>
>
> Thanks
> Hubert
>
> On Thu, 3 Jan 2019 at 12:38, Ashish Pandey <aspandey@xxxxxxxxxx> wrote:
> >
> > Hi,
> >
> > Some of the steps you followed are not correct.
> > You should have used the reset-brick command, which was introduced for exactly the task you wanted to do.
> >
> > https://docs.gluster.org/en/v3/release-notes/3.9.0/
> >
> > Your thinking was correct, but replacing a faulty disk requires some additional tasks, which this command
> > will do automatically.
> >
> > Step 1 :- kill the pid of the faulty brick on the node  >>>>>> This should be done with the "reset-brick start" command; follow the steps provided in the link.
> > Step 2 :- running volume status shows "N/A" under 'pid' & 'TCP port'
> > Step 3 :- replace the disk and mount the new disk at the same mount point where the old disk was mounted
> > Step 4 :- run the command "gluster v start volname force" >>>>>>>>>>>> This should be done with the "reset-brick commit force" command, which will trigger the heal (see the commands below). Follow the link.
> > Step 5 :- running volume status shows "N/A" under 'pid' & 'TCP port'
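> >
> > Put together, the sequence would look roughly like this (a sketch only;
> > volname, host and the brick path are placeholders):
> >
> > gluster volume reset-brick volname host:/path/to/brick start
> > # replace the disk, recreate the filesystem, mount it at the same path
> > gluster volume reset-brick volname host:/path/to/brick \
> >     host:/path/to/brick commit force
> > # the heal is triggered automatically; progress can be checked with:
> > gluster volume heal volname info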
> >
> > ---
> > Ashish
> >
> > ________________________________
> > From: "Amudhan P" <amudhan83@xxxxxxxxx>
> > To: "Gluster Users" <gluster-users@xxxxxxxxxxx>
> > Sent: Thursday, January 3, 2019 4:25:58 PM
> > Subject:  Glusterfs 4.1.6
> >
> > Hi,
> >
> > I am working with Glusterfs 4.1.6 on a test machine. I am trying to replace a faulty disk; below are the steps I followed, but I wasn't successful.
> >
> > 3 Nodes, 2 disks per node, Disperse Volume 4+2:
> > Step 1 :- kill the pid of the faulty brick on the node
> > Step 2 :- running volume status shows "N/A" under 'pid' & 'TCP port'
> > Step 3 :- replace the disk and mount the new disk at the same mount point where the old disk was mounted
> > Step 4 :- run the command "gluster v start volname force"
> > Step 5 :- running volume status shows "N/A" under 'pid' & 'TCP port'
> >
> > The expected behavior was that a new brick process would start and the heal would begin.
> >
> > Following the above steps on 3.10.1 works perfectly: a new brick process starts and the heal begins.
> > But the same steps do not work in 4.1.6. Did I miss any steps? What should be done?
> >
> > Amudhan
> >
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users


