Adding a previously removed brick to the gluster volume leaves the gluster mount empty

Hi all,
I hope someone can help me.

Adding a previously removed brick to the gluster volume leaves the gluster mount empty when running ls.

Steps to reproduce
Create a gluster volume with two bricks

On brick 1:
1. mkdir -p /data/brick/gv0
2. gluster volume create gv0 replica 2 192.168.0.2:/data/brick/gv0 192.168.0.3:/data/brick/gv0 force (after brick 2 step 2)
3. gluster volume start gv0
4. mkdir gluster
5. mount -t glusterfs 192.168.0.2:/gv0 gluster
6. Populate the newly created mount point with some files
7. ls -la gluster <- note list of files
8. ls -la gluster <- verify that the list of files is the same as in previous step (after brick 2 step 6)
9. ls -la gluster <- note that all files are gone (after brick 2 step 8)
10. ls -la /data/brick/gv0/ <- note that the backing store of brick 1 is still intact and no files or gfids appear to have been lost
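
(For anyone reproducing this: after brick 1 step 3 it is worth confirming that the volume actually came up before mounting it. These are just the standard gluster status commands, shown here for completeness, nothing specific to this problem:)

    # sanity check after brick 1 step 3
    gluster volume info gv0      # should show Type: Replicate with both bricks and Status: Started
    gluster volume status gv0    # both brick processes should be listed as online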

On brick 2:
1. mkdir -p /data/brick/gv0
2. gluster peer probe 192.168.0.2
3. mkdir gluster
4. mount -t glusterfs 192.168.0.2:/gv0 gluster (after brick 1 step 3)
5. gluster volume remove-brick gv0 replica 1 192.168.0.3:/data/brick/gv0 force (after brick 1 step 7)
6. rm -rf /data/brick/gv0/
7. gluster volume add-brick gv0 replica 2 192.168.0.3:/data/brick/gv0 force
8. ls -la gluster <- note that all files are gone
9. ls -la /data/brick/gv0/ <- note that the backing store of brick 2 is empty
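
(Likewise, right after brick 2 step 7 the pending self-heal work can be inspected with the standard heal info command, which lists per brick the entries that still need healing:)

    # inspect pending self-heal work after brick 2 step 7
    gluster volume heal gv0 info    # lists, per brick, the entries still pending heal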

Result
At this point the mount point "gluster" usually appears completely empty on both nodes. If the order of brick 1 step 9 and brick 2 step 8 is reversed, and you wait for brick 1 step 9 to complete, the problem is usually not seen.

Additional info
Ways of recovering:
1. ls -la gluster/filename (for each file)
       makes the files visible again but does not seem to guarantee that synchronization has completed.
2. find gluster/filename | xargs tail -c 1 > /dev/null 2>&1
       seems to do the same as option 1, but the files appear to be fully synchronized when the command completes.
3. gluster volume heal gv0 full
       performs a full synchronization of the nodes without the drawbacks mentioned for options 1 and 2, but it is asynchronous, which is not what we want.
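
One way to work around the asynchronous behaviour of option 3 would be to start the full heal and then poll the heal info output until no brick reports pending entries. This is only a rough sketch, and it assumes the "Number of entries:" lines that gluster volume heal gv0 info prints in the 3.x releases:

    # start a full heal, then wait until no brick reports pending entries
    # (relies on the "Number of entries:" lines in the heal info output)
    gluster volume heal gv0 full
    while gluster volume heal gv0 info | grep -q 'Number of entries: [1-9]'; do
        sleep 5
    done

This only works around the asynchronous behaviour rather than replacing it, so the question about a proper synchronous command still stands.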



The questions are as follows:
1. Why have all the files gone from the directory gluster after step 7 on brick 2?
2. Does gluster have a synchronous command that achieves the same result as "gluster volume heal gv0 full"?



The test was carried out on GlusterFS 3.6.x and several 3.7 versions, with 3.7.6 being the latest version tested.






Thanks,
Xin
