No healing after replacing a brick in a replicated volume

Hi,
I'm testing a replicated volume with a 3-VM setup:
gfs1:/export/sda3/brick
gfs2:/export/sda3/brick
gfsc as client

The volume name is gfs.
The Gluster version in this test is 3.6.3, on CentOS 6.6.
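
For reference, the volume was set up roughly like this (quoting from
memory; the client mount point /mnt/gfs is just what I happened to use):

    gluster peer probe gfs2
    gluster volume create gfs replica 2 gfs1:/export/sda3/brick gfs2:/export/sda3/brick
    gluster volume start gfs

and on gfsc:

    mount -t glusterfs gfs1:/gfs /mnt/gfs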

A replica-2 volume is created, and I try to simulate a brick failure like this (the corresponding shell commands are sketched after the list):
1. stop the glusterd and gluster processes on gfs1
2. unmount the brick
3. mkfs.xfs the brick
4. mount it back
5. start the gluster service
6. volume remove-brick gfs replica 1 gfs1:/export/sda3/brick force
7. volume add-brick gfs replica 2 gfs1:/export/sda3/brick
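
In shell terms, that was roughly the following on gfs1 (assuming the
brick filesystem is /dev/sda3 mounted at /export/sda3):

    service glusterd stop
    pkill glusterfsd            # brick process
    pkill glusterfs             # self-heal daemon / NFS, if still running
    umount /export/sda3
    mkfs.xfs -f /dev/sda3
    mount /dev/sda3 /export/sda3
    service glusterd start

and then, from either server:

    gluster volume remove-brick gfs replica 1 gfs1:/export/sda3/brick force
    gluster volume add-brick gfs replica 2 gfs1:/export/sda3/brick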

At this point, "gluster volume info gfs" shows the volume as a two-brick replicated volume, which is fine.
But Gluster somehow thinks the volume doesn't need healing.
Issuing "gluster volume heal gfs full" did not heal the volume, and no data was copied from the gfs2 brick back to gfs1.
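
This is roughly what I ran to trigger and then check the heal:

    gluster volume heal gfs full    # launch a full self-heal
    gluster volume heal gfs info    # list entries pending heal

In my case "heal ... info" reports no pending entries for either brick,
as if the empty gfs1 brick were already in sync.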
Is the problem in my replacement procedure, or something else?
Please advise ;)

Mike
