Re: Replica self-heal issue (gluster 3.4.2)

The files are present in dir1 and dir2 on node 1, and in dir1 on node 2.
dir2 on node 2 is empty.

On node 1:
[/mnt/gluster] # ls /home/harry/gluster/brick/dir*
/home/harry/gluster/brick/dir1:
a  b  c

/home/harry/gluster/brick/dir2:
a  b  c

[/mnt/gluster] # getfattr -m . -d -e hex /home/harry/gluster/brick/dir*
getfattr: Removing leading '/' from absolute path names
# file: home/harry/gluster/brick/dir1
system.posix_acl_access=0x0200000001000700ffffffff020007000000000002000500feff00000200070042420f000200050043420f000200050044420f0004000700ffffffff10000500ffffffff20000500ffffffff
system.posix_acl_default=0x0200000001000700ffffffff020007000000000002000500feff00000200070042420f000200050043420f000200050044420f0004000700ffffffff10000700ffffffff20000000ffffffff
trusted.afr.GV0_DATA-client-0=0x000000000000000000000000
trusted.afr.GV0_DATA-client-1=0x00000000ffffffffffffffff
trusted.gfid=0x28d502fdb1c94ce0ad631d4ab131add0
trusted.glusterfs.dht=0x000000010000000000000000ffffffff

# file: home/harry/gluster/brick/dir2
system.posix_acl_access=0x0200000001000700ffffffff020007000000000002000500feff00000200070042420f000200050043420f000200050044420f0004000700ffffffff10000500ffffffff20000500ffffffff
system.posix_acl_default=0x0200000001000700ffffffff020007000000000002000500feff00000200070042420f000200050043420f000200050044420f0004000700ffffffff10000700ffffffff20000000ffffffff
trusted.afr.GV0_DATA-client-0=0x000000000000000000000000
trusted.afr.GV0_DATA-client-1=0x000000000000000100000001
trusted.gfid=0x28d502fdb1c94ce0ad631d4ab131add0
trusted.glusterfs.dht=0x000000010000000000000000ffffffff
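
For reference, here is how I read those trusted.afr changelog values (my understanding is that each value packs three 32-bit big-endian counters: pending data, metadata and entry operations against the named client; please correct me if that layout is wrong):

# dir2 on node 1: operations still pending against client-1 (node 2's brick)
# trusted.afr.GV0_DATA-client-1 = 0x 00000000 00000001 00000001
#                                     data=0   metadata=1  entry=1
# dir1 on node 1 shows all-0xff metadata/entry counters against client-1,
# which I do not know how to interpret.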


On node 2:
[/mnt/gluster] # ls /home/harry/gluster/brick/dir*
/home/harry/gluster/brick/dir1:
a  b  c

/home/harry/gluster/brick/dir2:

[/mnt/gluster] # getfattr -m . -d -e hex /home/harry/gluster/brick/dir*
getfattr: Removing leading '/' from absolute path names
# file: home/harry/gluster/brick/dir1
system.posix_acl_access=0x0200000001000700ffffffff020007000000000002000500feff00000200070042420f000200050043420f000200050044420f0004000700ffffffff10000500ffffffff20000500ffffffff
system.posix_acl_default=0x0200000001000700ffffffff020007000000000002000500feff00000200070042420f000200050043420f000200050044420f0004000700ffffffff10000700ffffffff20000000ffffffff
trusted.afr.GV0_DATA-client-0=0x000000000000000000000000
trusted.afr.GV0_DATA-client-1=0x000000000000000000000000
trusted.gfid=0x28d502fdb1c94ce0ad631d4ab131add0
trusted.glusterfs.dht=0x000000010000000000000000ffffffff

# file: home/harry/gluster/brick/dir2
system.posix_acl_access=0x0200000001000700ffffffff020007000000000002000500feff00000200070042420f000200050043420f000200050044420f0004000700ffffffff10000500ffffffff20000000ffffffff
system.posix_acl_default=0x0200000001000700ffffffff020007000000000002000500feff00000200070042420f000200050043420f000200050044420f0004000700ffffffff10000700ffffffff20000000ffffffff
trusted.gfid=0x28d502fdb1c94ce0ad631d4ab131add0

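In case it is useful, these are the commands I plan to run next to check and kick the self-heal (assuming the volume name is GV0_DATA, as the xattr names suggest; I have not run them yet):

gluster volume heal GV0_DATA info
gluster volume heal GV0_DATA info split-brain
gluster volume heal GV0_DATA full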

On Thu, Apr 17, 2014 at 11:37 AM, Ravishankar N <ravishankar@xxxxxxxxxx> wrote:
On 04/16/2014 07:47 PM, Jia-Hao Chen wrote:
Dear all,

I create a replicated volume with 2 nodes.
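
(The volume setup was roughly the following; the host names node1/node2 are placeholders, and the brick path is /home/harry/gluster/brick on both nodes:)

gluster volume create GV0_DATA replica 2 node1:/home/harry/gluster/brick node2:/home/harry/gluster/brick
gluster volume start GV0_DATA
mount -t glusterfs node1:/GV0_DATA /mnt/gluster
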
Then I make a directory and create a few files in it:

[/mnt/gluster] # mkdir dir1
[/mnt/gluster] # echo 123 >> dir1/a
[/mnt/gluster] # echo 123 >> dir1/b
[/mnt/gluster] # echo 123 >> dir1/c

Next, I bring down node 2 and rename dir1 to dir2:

[/mnt/gluster] # mv dir1 dir2

After bringing node 2 back up, dir1 reappears:

[/mnt/gluster] # ls -l
drwxr-x---    2 harry    harry      4096 Apr 16 11:30 dir1/
drwxr-x---    2 harry    harry      4096 Apr 16 11:30 dir2/

It surprised me, but it makes sense, since dir1 is still present on node 2; when node 2 comes back, dir1 is healed onto the other node.
However, when I list dir1 and dir2, both are empty:

[/mnt/gluster] # ls -l dir1
[/mnt/gluster] # ls -l dir2
[/mnt/gluster] #

It seems like a bug to me.
What is the expected result in this case?

This is strange. The expected result is that node 2 must also contain only dir2, i.e. the self-heal must happen from node 1 to node 2 once the latter comes back online.
Are the files present in the backend bricks?
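
(Something along these lines on each node should show it; the brick paths can be found with 'gluster volume info':)

ls <brick-path>/dir1 <brick-path>/dir2
getfattr -m . -d -e hex <brick-path>/dir1 <brick-path>/dir2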

-Ravi
Best regards,
Chen, Chia-Hao





_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users


