On 29/05/19 3:59 AM, Alan Orth wrote:
Dear Ravishankar,
I'm not sure if Brick4 had pending AFRs because I don't
know what that means, and it's been a few days, so I'm not
sure I would still be able to find that information.
When you find some time, have a look at a blog series I wrote about AFR;
in it I've tried to explain what one needs to know to debug
replication-related issues.
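In short: each brick records pending heals for its replica peers in
trusted.afr.* extended attributes, and you can dump them directly on a
brick with getfattr. Something along these lines (the file path below is
just a placeholder):

# getfattr -d -m . -e hex /data/glusterfs/sdb/homes/path/to/some/file

Non-zero trusted.afr.<volname>-client-N values mean that brick is blaming
the corresponding replica and a heal is still pending for that file.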
Anyway, after wasting a few days rsyncing the old brick to
a new host I decided to just try to add the old brick back
into the volume instead of bringing it up on the new host. I
created a new brick directory on the old host, moved the old
brick's contents into that new directory (minus the .glusterfs
directory), added the new brick to the volume, and then did
Vlad's find/stat trick¹ from the brick to the FUSE mount
point.
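(Roughly like the following, from memory; /mnt/homes stands in for
wherever the volume is actually FUSE-mounted on that machine:

# find /data/glusterfs/sdb/homes -not -path '*/.glusterfs/*' | sed 's|^/data/glusterfs/sdb/homes|/mnt/homes|' | xargs -d '\n' stat > /dev/null

i.e. list everything on the brick and stat the corresponding path on the
FUSE mount so that each entry gets looked up.)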
The interesting problem I have now is that some files don't
appear in the FUSE mount's directory listings, but I can
actually list them directly and even read them. What could
cause that?
Not sure; there are too many variables in the hacks you did for me to take
a guess. You can check whether the contents of the .glusterfs folder are in
order on the new brick (for example, that the hard links for files and the
symlinks for directories are present, etc.).
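As a quick sanity check (the file path below is a placeholder): a regular
file on the brick should have a hard link count of at least 2, because
.glusterfs/<first two chars of gfid>/<next two chars>/<full gfid> is a hard
link to it, and for a directory that same .glusterfs path should be a
symlink pointing back to the directory.

# getfattr -n trusted.gfid -e hex /data/glusterfs/sdb/homes/path/to/file
# stat -c %h /data/glusterfs/sdb/homes/path/to/file

The first command prints the file's gfid, the second its hard link count;
anything less than 2 for a regular file means its .glusterfs hard link is
missing.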
Regards,
Ravi
On 23/05/19 2:40 AM, Alan Orth wrote:
Dear list,
I seem to have gotten into a tricky situation.
Today I brought up a shiny new server with new disk
arrays and attempted to replace one brick of a replica
2 distribute/replicate volume on an older server using
the `replace-brick` command:
# gluster volume replace-brick homes
wingu0:/mnt/gluster/homes
wingu06:/data/glusterfs/sdb/homes commit force
The command was successful and I see the new brick
in the output of `gluster volume info`. The problem is
that Gluster doesn't seem to be migrating the data,
`replace-brick` definitely must heal (not migrate) the
data. In your case, the data must have been healed from
Brick-4 to the replaced Brick-3. Are there any errors in
the self-heal daemon logs on Brick-4's node? Does Brick-4
have pending AFR xattrs blaming Brick-3? The doc is a bit
out of date; the replace-brick command internally does all
the setfattr steps mentioned in it.
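To check, something like this should do (the log path is the usual
default and may differ on your installation):

# gluster volume heal homes info
# grep -iE 'error|failed' /var/log/glusterfs/glustershd.log

The first lists entries still pending heal per brick, and the second is a
crude way to spot self-heal daemon errors on Brick-4's node (wingu05).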
-Ravi
and now the original brick that I replaced is no
longer part of the volume (and a few terabytes of data
are just sitting on the old brick):
# gluster volume info homes | grep -E "Brick[0-9]:"
Brick1: wingu4:/mnt/gluster/homes
Brick2: wingu3:/mnt/gluster/homes
Brick3: wingu06:/data/glusterfs/sdb/homes
Brick4: wingu05:/data/glusterfs/sdb/homes
Brick5: wingu05:/data/glusterfs/sdc/homes
Brick6: wingu06:/data/glusterfs/sdc/homes
I see the Gluster docs have a more complicated
procedure for replacing bricks that involves
getfattr/setfattr¹. How can I tell Gluster about the
old brick? I see that I have a backup of the old
volfile thanks to yum's rpmsave function, if that helps.
We are using Gluster 5.6 on CentOS 7. Thank you for
any advice you can give.
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
https://lists.gluster.org/mailman/listinfo/gluster-users