Re: AFR write completion? AFR read redundancy?

--- Anand Avati <avati@xxxxxxxxxxxxx> wrote:
> > Will self healing prevent an inconsistent cluster
> > from happening?  I.e., a two-node cluster, A+B:
> >
> > 1) Node A goes down
> > 2) Write occurs on Node B
> > 3) Node B goes down (cluster is down)
> > 4) Node A comes up -> cluster is inconsistent,
> > since B is not yet available.  Cluster should
> > still be "down".
> 
> 
> This is not assured to work. The intersection of the
> two subsets of subvolumes before and after a group
> (subset) of nodes is added or removed should not be
> empty.

Hmm, that is what I feared!  Are there any plans to
ensure that this condition is met?  Without this, how
do people currently trust AFR?  Do they simply assume
that their cluster never cold boots?

Since it sounds like self healing will not ensure
cluster consistency, is there another planned
task/feature that will?  If not, is that because it is
viewed as impossible or too difficult?  Even in this
extreme case, it seems simple enough to at least track
and prevent, doesn't it?

Once a cluster is up and running, any remaining
running nodes should stay consistent, right?  So the
tricky part seems to be dealing with cold boots, when
no running, consistent cluster exists.

Would it be possible to implement the easiest (though
obviously not the best) solution to this, at least for
server-side AFR: simply ensure that all nodes are
online after a cold boot before making the cluster FS
available to clients?  I realize this simplistic
solution has drawbacks, and I can think of several
optimizations to improve it, but I can't see how I
could use or trust AFR without even the most
simplistic safeguard ensuring cold-boot consistency.
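To make the idea concrete, the "all nodes online before export" gate I
have in mind could be sketched roughly like this (the peer list, port,
and function names here are all hypothetical, just an illustration of
the check a startup script could run before exporting the volume; this
is not anything GlusterFS provides today):

```python
import socket

def peer_online(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def safe_to_export(peers):
    """Cold-boot gate: only allow exporting the cluster FS once every
    replica peer is reachable, rather than risk serving stale data
    from a node that missed the last writes.

    `peers` is a list of (host, port) tuples for all AFR subvolumes.
    """
    return all(peer_online(host, port) for host, port in peers)
```

A startup script would loop on `safe_to_export()` (with some retry
delay) and only then start serving clients; an obvious optimization
would be to release the gate early once a node known to hold the
latest data is up, but even the all-nodes version would be enough for
me to trust a cold boot.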

Does this simply not matter to others?  Have they not
realized that this is a potential problem with the
current design, or do they expect self healing to
solve it?

Thanks,

-Martin


P.S. This is the only blocking point preventing me
from dropping my current DRBD solution.  In most other
ways glusterfs has (or plans to have) at least feature
parity with DRBD, and in many ways, of course, it is
way more advanced than a DRBD solution.  Cold boot
consistency is hard to live without though!


