Re: Re: Unexpected behaviour when self-healing

I finally got it working with client-side AFR only. I had assumed that
self-healing was only possible in Unify, but the self-healing in AFR
already does everything I currently want. Great!
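
For reference, a minimal client-side AFR spec looks roughly like this
(the hostnames and the remote-subvolume name "brick" are placeholders,
not my actual setup):

volume www1
  type protocol/client
  option transport-type tcp/client
  option remote-host server1.example.com
  option remote-subvolume brick
end-volume

volume www2
  type protocol/client
  option transport-type tcp/client
  option remote-host server2.example.com
  option remote-subvolume brick
end-volume

volume afr0
  type cluster/afr
  subvolumes www1 www2
end-volume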

However, I am thinking about some sort of "balance" translator that can
distribute files, e.g. with a replication count of 3, across all underlying
datastores. Let's assume all clients are configured like this, with the
imaginary balance translator:

volume balance
  type cluster/balance
  subvolumes www1 www2 www3 www4 www5
  option switch *:3
  option scheduler rr
  option namespace www-ns
end-volume

A single file of any type would be placed on three servers chosen at random
from the five, for redundancy and failure protection. Is this already
possible? By mixing AFR and Unify there seem to be ways to choose explicitly
where to store which file types (see the sketch below), but is such a truly
redundant setup also possible? Google uses a similar approach AFAIK, and it
is the concept behind MogileFS. This could also cover plain AFR (switch *:5)
or striping (switch *:1) somewhat more easily than is currently possible,
I think. Adding and removing servers would be very easy, too: just check all
files for consistency (say, with ls -lR).
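
For comparison, the static form of this is already expressible by stacking
Unify over fixed AFR groups, roughly like the sketch below (assuming the
www1..www4 client volumes and the www-ns namespace volume are defined
elsewhere). The difference to the balance idea is that the replica sets are
fixed per group instead of being chosen per file:

volume afr-a
  type cluster/afr
  subvolumes www1 www2
end-volume

volume afr-b
  type cluster/afr
  subvolumes www3 www4
end-volume

volume unify0
  type cluster/unify
  option scheduler rr
  option namespace www-ns
  subvolumes afr-a afr-b
end-volume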

regards
Daniel

