AFR questions

At 09:07 PM 12/5/2008, Kirby Zhou wrote:
>For example:
>
>volume ns-afr0
>   type cluster/afr
>   subvolumes remote-ns1 remote-ns2 remote-ns3 remote-ns4
>end-volume
>
>Anything written to ns-afr0 will be AFRed to all 4 subvolumes.
>For as many copies as you want, configure that many subvolumes.
>
>But I failed to activate the auto-healing function.
>
>Step1:  I created a client-side-AFR-based unify volume; both the namespace
>and the storage are AFRed. I name the 2 nodes node1 and node2.
>Step2:  glusterfs -s node1 -n unify0 /mnt
>Step3:  cp something /mnt/xxx
>Step4:  Check node1's and node2's storage; found 2 copies of the file xxx.
>Step5:  Stop node2's glusterfsd
>Step6:  cat something else >> /mnt/xxx
>Step7:  Start node2's glusterfsd
>Step8:  Sleep 100
>Step9:  Check node2's storage; the file xxx is unchanged throughout.

Did you cat the file through the gluster mount point, or on the
underlying filesystem?
Auto-heal is automatically "activated", but it only heals on
file access. If you access the file through the gluster mount point,
the client should find that the copy is out of date and update it from
one of the other servers.
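In other words, accessing the stale file through the mount (not the backend brick directory) is what kicks off the repair. A minimal sketch, assuming the mount point /mnt and filename xxx from the steps above:

```shell
# trigger_heal: access a file through the gluster mount point so the
# client notices any stale replica and repairs it from a good copy.
# The path you pass is assumed to be under the GlusterFS mount (/mnt),
# NOT the server's underlying storage directory.
trigger_heal() {
    # a lookup/stat makes the client compare the replicas' metadata
    stat "$1" > /dev/null &&
    # reading the file forces the data to be healed as well
    cat "$1" > /dev/null
}

# Example, against the mount from the steps above:
# trigger_heal /mnt/xxx
```

After this, the copy on node2's backend storage should catch up; checking the brick directly before any access through the mount will still show the old contents.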

Check your gluster logs: grep for your filename and see what they
say (on both servers).
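A quick way to do that check, assuming a conventional log location such as /var/log/glusterfs/ (adjust to wherever your volume files point their logfile option):

```shell
# find_file_mentions: grep a glusterfs log for a filename, with line
# numbers, and say so explicitly when nothing turns up. Both the log
# path and the filename here are assumptions; substitute your own.
find_file_mentions() {
    grep -n "$1" "$2" || echo "no mentions of $1 in $2"
}

# Run on each server, e.g.:
# find_file_mentions xxx /var/log/glusterfs/glusterfsd.log
```

Self-heal activity for a file is normally logged, so a hit on both servers (or the absence of one) is a good first clue about whether the heal was ever attempted.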




