Re: Removing subvolume from dist/rep volume

On Fri, 28 Jun 2019 at 14:34, Dave Sherohman <dave@xxxxxxxxxxxxx> wrote:
On Thu, Jun 27, 2019 at 12:17:10PM +0530, Nithya Balachandran wrote:
> On Tue, 25 Jun 2019 at 15:26, Dave Sherohman <dave@xxxxxxxxxxxxx> wrote:
> > My objective is to remove nodes B and C entirely.
> >
> > First up is to pull their bricks from the volume:
> >
> > # gluster volume remove-brick myvol B:/data C:/data A:/arb1 start
> > (wait for data to be migrated)
> > # gluster volume remove-brick myvol B:/data C:/data A:/arb1 commit
> >
> >
> There are some edge cases that may prevent a file from being migrated
> during a remove-brick. Please do the following after this:
>
>    1. Check the remove-brick status for any failures.  If there are any,
>    check the rebalance log file for errors.
>    2. Even if there are no failures, check the removed bricks to see if any
>    files have not been migrated. If any remain, verify that they are valid
>    files on the brick, then copy them from the brick into the volume
>    through a mount point.
>
> The rest of the steps look good.
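Step 2 above (checking a removed brick for files left behind) can be sketched as a find over the brick root. This is only a sketch under common assumptions: the brick path is illustrative, `.glusterfs` holds Gluster's internal metadata, and DHT link-to pointer files are typically zero-byte files with only the sticky bit set (mode 1000), so both are excluded.

```shell
#!/bin/sh
# Hypothetical brick path; substitute the actual removed brick.
BRICK=${BRICK:-/var/local/brick0/data}

# List regular files left behind on the brick, skipping Gluster's
# internal .glusterfs tree and (assumed) DHT link-to pointer files,
# which carry only the sticky bit (mode 1000).
if [ -d "$BRICK" ]; then
    find "$BRICK" -path "$BRICK/.glusterfs" -prune -o \
         -type f ! -perm 1000 -print
fi
```

Anything this prints should be inspected and, if valid, copied back into the volume via a mount point rather than directly between bricks.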

Apparently, they weren't quite right after all. When I tried the
remove-brick command, it just returned the usage notes. A transcript of
the commands and their output is below.

Any insight into how I got the syntax wrong?

--- cut here ---
root@merlin:/# gluster volume status
Status of volume: palantir
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick saruman:/var/local/brick0/data        49153     0          Y       17995
Brick gandalf:/var/local/brick0/data        49153     0          Y       9415
Brick merlin:/var/local/arbiter1/data       49170     0          Y       35034
Brick azathoth:/var/local/brick0/data       49153     0          Y       25312
Brick yog-sothoth:/var/local/brick0/data    49152     0          Y       10671
Brick merlin:/var/local/arbiter2/data       49171     0          Y       35043
Brick cthulhu:/var/local/brick0/data        49153     0          Y       21925
Brick mordiggian:/var/local/brick0/data     49152     0          Y       12368
Brick merlin:/var/local/arbiter3/data       49172     0          Y       35050
Self-heal Daemon on localhost               N/A       N/A        Y       1209
Self-heal Daemon on saruman.lub.lu.se       N/A       N/A        Y       23253
Self-heal Daemon on gandalf.lub.lu.se       N/A       N/A        Y       9542
Self-heal Daemon on mordiggian.lub.lu.se    N/A       N/A        Y       11016
Self-heal Daemon on yog-sothoth.lub.lu.se   N/A       N/A        Y       8126
Self-heal Daemon on cthulhu.lub.lu.se       N/A       N/A        Y       30998
Self-heal Daemon on azathoth.lub.lu.se      N/A       N/A        Y       34399

Task Status of Volume palantir
------------------------------------------------------------------------------
Task                 : Rebalance           
ID                   : e58bc091-5809-4364-af83-2b89bc5c7106
Status               : completed           

root@merlin:/# gluster volume remove-brick palantir saruman:/var/local/brick0/data gandalf:/var/local/brick0/data merlin:/var/local/arbiter1/data



You had it right in the first email:

 gluster volume remove-brick palantir replica 3 arbiter 1 saruman:/var/local/brick0/data gandalf:/var/local/brick0/data merlin:/var/local/arbiter1/data start


Usage:
volume remove-brick <VOLNAME> [replica <COUNT>] <BRICK> ... <start|stop|status|commit|force>

root@merlin:/# gluster volume remove-brick palantir replica 3 arbiter 1 saruman:/var/local/brick0/data gandalf:/var/local/brick0/data merlin:/var/local/arbiter1/data

Usage:
volume remove-brick <VOLNAME> [replica <COUNT>] <BRICK> ... <start|stop|status|commit|force>

root@merlin:/# gluster volume remove-brick palantir replica 3 saruman:/var/local/brick0/data gandalf:/var/local/brick0/data merlin:/var/local/arbiter1/data

Usage:
volume remove-brick <VOLNAME> [replica <COUNT>] <BRICK> ... <start|stop|status|commit|force>
--- cut here ---
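Reading the usage text in the transcript, two things stand out: the command requires a trailing action keyword (`start|stop|status|commit|force`), which all three attempts omit, and the remove-brick usage accepts only `[replica <COUNT>]`, with no `arbiter` keyword, so that may need to be dropped as well. A sketch of the likely sequence, using the brick paths from the transcript (whether the `replica 3` clause is needed here depends on the volume, so treat it as an assumption):

```shell
# Start migrating data off the bricks being removed.
gluster volume remove-brick palantir \
    saruman:/var/local/brick0/data \
    gandalf:/var/local/brick0/data \
    merlin:/var/local/arbiter1/data start

# Poll until the rebalance reports completed, checking for failures.
gluster volume remove-brick palantir \
    saruman:/var/local/brick0/data \
    gandalf:/var/local/brick0/data \
    merlin:/var/local/arbiter1/data status

# Only then finalize the removal.
gluster volume remove-brick palantir \
    saruman:/var/local/brick0/data \
    gandalf:/var/local/brick0/data \
    merlin:/var/local/arbiter1/data commit
```

Per the earlier advice in the thread, check the removed bricks for unmigrated files before and after the commit.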

--
Dave Sherohman
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users
