Bug in self-healing in latest git 2.0.0pre33 and RC7

Hello,
I found a bug in self-healing:

Two AFR servers, each also acting as a client.

gluster mount point:  /mnt/vdisk
gluster backend path: /mnt/disk
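For reference, a minimal sketch of the kind of client volfile I mean
(hostnames server1/server2 and volume names are placeholders, not my
real configuration):

    # Minimal AFR client volfile sketch; names are placeholders.
    volume remote1
      type protocol/client
      option transport-type tcp
      option remote-host server1        # first server declared in AFR
      option remote-subvolume brick
    end-volume

    volume remote2
      type protocol/client
      option transport-type tcp
      option remote-host server2        # second server declared in AFR
      option remote-subvolume brick
    end-volume

    volume afr
      type cluster/afr
      subvolumes remote1 remote2        # declaration order matters below
    end-volume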

1 - touch /mnt/vdisk/TEST1: OK, the file appears on both servers.
2a - rm /mnt/disk/TEST1 on the first server declared in the AFR translator
      -> ls -l /mnt/vdisk returns an empty listing on all servers: OK
2b - (instead of 2a): rm /mnt/disk/TEST1 on the second server declared in the AFR translator
      -> ls -l /mnt/vdisk still returns TEST1 on all servers: not OK (transcript below)
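For case 2b, what I see is roughly this (server1/server2 are the
placeholder hostnames from the sketch above):

    server2# rm /mnt/disk/TEST1    # remove the file from server 2's backend
    server1# ls -l /mnt/vdisk      # TEST1 is still listed here
    server2# ls -l /mnt/vdisk      # and still listed here as well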

This is the first bug. I think the problem is that load balancing is
not working: operations are always executed on the same server, the
first one declared. The same problem shows up with the read-subvolume
option, which does not work either.
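By read-subvolume I mean the AFR option, roughly as in this sketch
(volume names are the placeholders from above):

    volume afr
      type cluster/afr
      subvolumes remote1 remote2
      option read-subvolume remote2   # should direct reads to server 2,
                                      # but reads still go to remote1
    end-volume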

3a - (with the second server defined as favorite child): no
synchronization, TEST1 is never recreated (which is consistent with
operations always being served from server 1; see the sketch below).
   Now I write some data to /mnt/disk/TEST1 on the second server, then
I touch /mnt/vdisk/TEST1 again => TEST1 is synchronized on both servers
with server 2's content: OK
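By favorite child I mean the AFR option, roughly like this (again with
placeholder volume names):

    volume afr
      type cluster/afr
      subvolumes remote1 remote2
      option favorite-child remote2   # server 2's copy wins on conflict
    end-volume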

From my point of view, ls /mnt/vdisk should not always fetch its data
from the same server, should it?

I can work around this problem by touching, through /mnt/vdisk, every
file that exists on server 2's backend; ls /mnt/vdisk then reports a
file size of 0, but favorite-child resynchronizes the files with the
correct content.
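Roughly like this, run on server 2 (a sketch, using the paths above):

    # Touch, through the mount point, every file present on this
    # server's backend so that favorite-child resynchronizes it.
    cd /mnt/disk
    find . -type f | while read -r f; do touch "/mnt/vdisk/$f"; done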


To summarize:
if I reinstall a server from scratch and, in my client configuration
file, that server appears as the first subvolume declared in the AFR
translator, it cannot be synchronized with the second server.


Regards,
Nicolas Prochazka.



