On 23/05/19 2:40 AM, Alan Orth wrote:
> Dear list,
>
> I seem to have gotten into a tricky situation. Today I brought up a
> shiny new server with new disk arrays and attempted to replace one
> brick of a replica 2 distribute/replicate volume on an older server
> using the `replace-brick` command:
>
> # gluster volume replace-brick homes wingu0:/mnt/gluster/homes \
>     wingu06:/data/glusterfs/sdb/homes commit force
>
> The command was successful and I see the new brick in the output of
> `gluster volume info`. The problem is that Gluster doesn't seem to be
> migrating the data,
`replace-brick` heals (rather than migrates) the data. In your case,
the data should have been healed from Brick-4 onto the replaced
Brick-3. Are there any errors in the self-heal daemon log on Brick-4's
node? Does Brick-4 have pending AFR xattrs blaming Brick-3? The doc is
a bit out of date; the `replace-brick` command now performs all of the
setfattr steps mentioned there internally.
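
For example, something like the following on wingu05 (Brick-4's node)
should show whether heals are still pending and whether Brick-4 blames
Brick-3; the file path is only illustrative, so substitute one that
actually exists on the volume:

# gluster volume heal homes info
# getfattr -d -m . -e hex /data/glusterfs/sdb/homes/path/to/some/file

The first command lists entries that still need healing; the second
dumps the trusted.afr.homes-client-* xattrs on Brick-4's copy of a
file, where a non-zero value means pending heals against the other
brick in that replica pair. The self-heal daemon log to check is
/var/log/glusterfs/glustershd.log on that node.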
-Ravi
> and now the original brick that I replaced is no longer part of the
> volume (and a few terabytes of data are just sitting on the old
> brick):
>
> # gluster volume info homes | grep -E "Brick[0-9]:"
> Brick1: wingu4:/mnt/gluster/homes
> Brick2: wingu3:/mnt/gluster/homes
> Brick3: wingu06:/data/glusterfs/sdb/homes
> Brick4: wingu05:/data/glusterfs/sdb/homes
> Brick5: wingu05:/data/glusterfs/sdc/homes
> Brick6: wingu06:/data/glusterfs/sdc/homes
>
> I see the Gluster docs have a more complicated procedure for
> replacing bricks that involves getfattr/setfattr¹. How can I tell
> Gluster about the old brick? I see that I have a backup of the old
> volfile thanks to yum's rpmsave function if that helps.
>
> We are using Gluster 5.6 on CentOS 7. Thank you for any advice you
> can give.
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users