On 12/06/19 1:38 PM, Alan Orth wrote:
Dear Ravi,
Thanks for the confirmation. I replaced a brick in a volume last night, and by the morning I could see that Gluster had replicated data there, though I didn't have any indication of its progress. The output of `gluster v heal volume info` and `gluster v heal volume info split-brain` both look good, so I guess that's enough of an indication.
Yes, right now heal info showing no files is the indication. A new command to estimate the time remaining for pending heals is being worked on; see https://github.com/gluster/glusterfs/issues/643.
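Until that lands, the usual way to keep an eye on progress is to poll heal info per brick. A minimal sketch (the volume name and the grep pattern are only illustrative; the exact output labels can differ between releases):

# List pending heal entries per brick; an empty list on every brick means the heal is done
gluster volume heal volume info

# Only the entries that are actually in split-brain
gluster volume heal volume info split-brain

# Poll the per-brick entry counts every minute (grep pattern is illustrative)
watch -n 60 'gluster volume heal volume info | grep -i "number of entries"'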
One question, though. Immediately after I replaced the brick I checked `gluster v status volume` and saw the following:

Task Status of Volume volume
------------------------------------------------------------------------------
Task                 : Rebalance
ID                   : a890e99c-5715-4bc1-80ee-c28490612135
Status               : not started

I did not initiate a rebalance, so it's strange to see it there. Is Gluster hinting that I should start a rebalance? If a rebalance is "not started", shouldn't Gluster just not show it at all?
`replace-brick` should not show rebalance status. Not sure why you're seeing it. Adding Nithya for help.
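In the meantime, one way to cross-check is to ask the rebalance command itself what it thinks. This is only a suggestion, and "volume" below is just the volume name from your output:

# Task list as reported by volume status
gluster volume status volume

# What the rebalance machinery itself reports; if no rebalance was ever
# started on the volume, this should not show any progress for it
gluster volume rebalance volume status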
Regarding the patch to the documentation: absolutely! Let me just get my Gluster back in order after my confusing upgrade last month. :P
Great. Please send the PR to the https://github.com/gluster/glusterdocs/ project. I think docs/Administrator Guide/Managing Volumes.md is the file that needs to be updated.
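In case it helps, the usual GitHub flow for that would be something like the sketch below; the fork URL and branch name are placeholders:

# Clone your own fork of the docs repository (URL is a placeholder)
git clone https://github.com/<your-user>/glusterdocs.git
cd glusterdocs
git checkout -b update-replace-brick-docs

# Edit the page mentioned above, then commit and push
git add "docs/Administrator Guide/Managing Volumes.md"
git commit -m "Update replace-brick steps for distribute-replicate volumes"
git push origin update-replace-brick-docs
# ...and open the pull request against gluster/glusterdocs on GitHub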
-Ravi
On 11/06/19 9:11 PM, Alan Orth wrote:
Dear list,
In a recent discussion on this list Ravi suggested that the documentation for replace-brick¹ was out of date. For a distribute–replicate volume the documentation currently says that we need to kill the old brick's PID, create a temporary empty directory on the FUSE mount, check the xattrs, and then run replace-brick with commit force, roughly as sketched below.
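If I'm reading the current documentation correctly, the manual procedure amounts to something like this; the volume name, server, and brick paths are placeholders, and the xattr names are my reconstruction of what the docs describe rather than a verbatim copy:

# 1. Find the PID of the brick to be replaced and kill it
gluster volume status myvol          # note the PID of server:/bricks/old
kill -15 <pid-of-old-brick>

# 2. Create and remove a temporary directory on the FUSE mount, and set/remove
#    a dummy xattr, so pending-heal metadata gets marked against the dead brick
mkdir /mnt/fuse_mnt/nonexistent-dir
rmdir /mnt/fuse_mnt/nonexistent-dir
setfattr -n trusted.non-existent-key -v abc /mnt/fuse_mnt
setfattr -x trusted.non-existent-key /mnt/fuse_mnt

# 3. Check the afr xattrs on the surviving replica brick
getfattr -d -m . -e hex /bricks/good

# 4. Replace the brick
gluster volume replace-brick myvol server:/bricks/old server:/bricks/new commit force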
Is all this still necessary? I'm running Gluster 5.6 on CentOS 7 with a distribute–replicate volume.
No, all of these steps are 'codified' into the `replace-brick ... commit force` command via https://review.gluster.org/#/c/glusterfs/+/10076/ and https://review.gluster.org/#/c/glusterfs/+/10448/. See the commit messages of those two patches for more details.
You can play around with most of these commands in a single-node setup if you want to convince yourself that they work; there is no need to form a cluster.
[root@tuxpad glusterfs]# gluster v create testvol replica 3 127.0.0.2:/home/ravi/bricks/brick{1..3} force
[root@tuxpad glusterfs]# gluster v start testvol
[root@tuxpad glusterfs]# mount -t glusterfs 127.0.0.2:testvol /mnt/fuse_mnt/
[root@tuxpad glusterfs]# touch /mnt/fuse_mnt/FILE
[root@tuxpad glusterfs]# ll /home/ravi/bricks/brick*/FILE
-rw-r--r--. 2 root root 0 Jun 11 21:55 /home/ravi/bricks/brick1/FILE
-rw-r--r--. 2 root root 0 Jun 11 21:55 /home/ravi/bricks/brick2/FILE
-rw-r--r--. 2 root root 0 Jun 11 21:55 /home/ravi/bricks/brick3/FILE
[root@tuxpad glusterfs]# gluster v replace-brick testvol 127.0.0.2:/home/ravi/bricks/brick3 127.0.0.2:/home/ravi/bricks/brick3_new commit force
volume replace-brick: success: replace-brick commit force operation successful
[root@tuxpad glusterfs]# ll /home/ravi/bricks/brick3_new/FILE
-rw-r--r--. 2 root root 0 Jun 11 21:55 /home/ravi/bricks/brick3_new/FILE
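After the replace-brick, the self-heal daemon takes care of copying data onto the new brick; you can confirm it has finished with the same heal info commands, along the lines of (once every brick reports zero pending entries, the heal is done):

gluster volume heal testvol info
gluster volume heal testvol info split-brain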
Why don't you send a patch to update the doc for replace-brick? I'd be happy to review it. ;-)
HTH,
Ravi
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users