Fwd: Problems with self heal on replicated DHT

Hi,
I have now tested GlusterFS 3.0 on a different system and see the same
self-heal problems I experienced in 2.0.6.

To summarize:
I use cluster/replicate and cluster/distribute.
Brick 1 on server 1 is replicated to brick 1 on server 2.
Brick 2 on server 1 is replicated to brick 2 on server 2, and so on.
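
For reference, here is a minimal sketch of the kind of client-side volfile
this layout implies. The volume and host names are illustrative only, not
taken from my actual config, and I show just one brick pair:

```
# one protocol/client volume per backend brick
volume server1-brick1
  type protocol/client
  option transport-type tcp
  option remote-host server1
  option remote-subvolume brick1
end-volume

volume server2-brick1
  type protocol/client
  option transport-type tcp
  option remote-host server2
  option remote-subvolume brick1
end-volume

# each brick pair is mirrored by cluster/replicate
volume replicate1
  type cluster/replicate
  subvolumes server1-brick1 server2-brick1
end-volume

# the replicate volumes (one per pair) sit under cluster/distribute
volume dht
  type cluster/distribute
  subvolumes replicate1
end-volume
```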

If I delete a file called "movie.avi" from a brick behind GlusterFS's back
on server 2, self-heal works fine with the "ls -lR" command.

If I delete a file called "movie.avi" from a brick behind GlusterFS's back
on server 1, self-heal with "ls -lR" doesn't work.

If I delete a file called "movie.avi" from a brick behind GlusterFS's back
on server 1, self-heal with "ls -lR movie.avi" works.

So for self-heal to work in this case, you have to name the missing file
explicitly. I wonder how many people do that? Most people probably run a
plain "ls -lR" and assume everything has been healed.

Hope this is useful.

Regards
Roland



2009/12/15 Raghavendra G <raghavendra at gluster.com>

> ---------- Forwarded message ----------
> From: Raghavendra G <raghavendra.hg at gmail.com>
> Date: 2009/12/15
> Subject: Re: Problems with self heal on replicated DHT
> To: Roland Rabben <roland at jotta.no>
> Cc: gluster-users at gluster.org
>
>
> Hi Roland,
>
> Can you try 3.0.0 and check whether the problem with self-heal persists?
>
> regards,
> On Tue, Dec 15, 2009 at 3:14 PM, Roland Rabben <roland at jotta.no> wrote:
>
> > Hi
> > I am using Gluster 2.06 on my servers and clients. My setup is a
> replicated
> > with cluster/replicate and then distribued with DHT.
> >
> >  My bricks are paired. Brick1 on server1 is replicated to Brick2 on
> server
> > 2
> > and so on.
> >
> > My problem is that it seems that self heal only works on the secondary
> > bricks (Brick2 in this example).
> >
> > For a test I can rm a file from Brick2 behind GlusterFS's back, run a
> > self-heal on my dfs with "ls -lR", and the file will be recovered on
> > Brick2 as expected.
> >
> > However if I remove a file from Brick1 and run the same self heal, it
> > doesn't work. Not only that, but the file is now gone from my dfs. (It is
> > still stored on Brick2)
> >
> > I have tried using option "lookup-unhashed on" and option
> > "lookup-unhashed yes". (There seems to be some confusion about the
> > correct syntax in the docs: the example at
> > http://gluster.com/community/documentation/index.php/Translators/cluster/distribute
> > uses "yes", while the text below it says the default is "on".)
> >
> > Is this the expected behavior?
> >
> > Please advise on how to fix this. I want a self heal to fix both my
> primary
> > and secondary bricks.
> >
> > Regards
> >
> > Roland Rabben
> >
> > _______________________________________________
> > Gluster-users mailing list
> > Gluster-users at gluster.org
> > http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
> >
> >
>
>
> --
> Raghavendra G
>
>
>


-- 
Roland Rabben
Founder & CEO Jotta AS
Cell: +47 90 85 85 39
Phone: +47 21 04 29 00
Email: roland at jotta.no

