Problems with self heal on replicated DHT

Hi Roland,

Can you try 3.0.0 and check whether the problem with self-heal persists?

regards,
On Tue, Dec 15, 2009 at 3:14 PM, Roland Rabben <roland at jotta.no> wrote:

> Hi
> I am using Gluster 2.0.6 on my servers and clients. My setup is replicated
> with cluster/replicate and then distributed on top with cluster/distribute (DHT).
>
>  My bricks are paired: Brick1 on server1 is replicated to Brick2 on server2,
> and so on.
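>
> For reference, the client-side volfile follows roughly this pattern (a sketch
> only; the host, brick and volume names here are placeholders, not my exact
> config):
>
>   volume brick1
>     type protocol/client
>     option transport-type tcp
>     option remote-host server1
>     option remote-subvolume brick1
>   end-volume
>
>   volume brick2
>     type protocol/client
>     option transport-type tcp
>     option remote-host server2
>     option remote-subvolume brick2
>   end-volume
>
>   # Brick1 and Brick2 form one mirrored pair
>   volume pair-0
>     type cluster/replicate
>     subvolumes brick1 brick2
>   end-volume
>
>   # pair-1, pair-2, ... are defined the same way over the remaining servers
>
>   # files are spread across the pairs with DHT
>   volume dfs
>     type cluster/distribute
>     subvolumes pair-0 pair-1
>   end-volume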
>
> My problem is that it seems that self heal only works on the secondary
> bricks (Brick2 in this example).
>
> For a test I can rm a file from Brick2 behind GlusterFS's back, run a self
> heal on my dfs with ls -lR, and the file will be recovered on Brick2 as
> expected.
>
> However, if I remove a file from Brick1 and run the same self heal, it
> doesn't work. Not only that, but the file is now gone from my dfs (it is
> still stored on Brick2).
>
> I have tried both option "lookup-unhashed on" and option "lookup-unhashed
> yes". (There seems to be some confusion in the docs about the correct
> syntax: the example at
> http://gluster.com/community/documentation/index.php/Translators/cluster/distribute
> uses "yes", while the text below it says the default is "on".)
>
> Is this the expected behavior?
>
> Please advise on how to fix this. I want a self heal to fix both my primary
> and secondary bricks.
>
> Regards
>
> Roland Rabben
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
>


-- 
Raghavendra G

