Hello,

I hope someone can help me with a strange issue I am experiencing.

I run an EC2 stack, call it v1, with a GlusterFS volume made of three single-brick nodes in replica mode. I take an EBS volume-level snapshot of one of the bricks (not a Gluster-level snapshot).

Now I want to spin up a new EC2 stack, v2, using that single snapshot from v1 to build all three bricks in v2. The EBS volumes build successfully, and I can see that all the data is there on all three bricks. I clean the bricks with the usual setfattr / rm -rf .glusterfs routine, start glusterd, probe the peers, and create a volume using the same volume name that already exists on the bricks, with the same replica mode. There are no errors and all looks fine, but when I mount the volume I see only the top-level directory: no files and no subdirectories. Running 'gluster volume heal <volname> full' and 'ls' does not change the situation; 'volume info' and 'volume status' report that all is well, and heal info shows zero files.

I am using the latest GlusterFS, 3.7.5.

Thank you
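For completeness, the steps I ran were roughly along these lines (the device name, brick path, hostnames, and volume name below are placeholders, not my exact values):

    # on each v2 node: attach and mount the EBS volume restored
    # from the v1 snapshot
    mount /dev/xvdf /data/brick1

    # wipe the old volume identity so the brick can be reused
    setfattr -x trusted.glusterfs.volume-id /data/brick1
    setfattr -x trusted.gfid /data/brick1
    rm -rf /data/brick1/.glusterfs
    service glusterd start

    # from one node: probe peers and recreate the volume
    gluster peer probe node2
    gluster peer probe node3
    gluster volume create myvol replica 3 \
        node1:/data/brick1 node2:/data/brick1 node3:/data/brick1
    gluster volume start myvol

    # mount and try to trigger a full heal
    mount -t glusterfs node1:/myvol /mnt/gluster
    gluster volume heal myvol full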