Re: two-node HA cluster failover test - failed again :(
On Wed, 09 Apr 2008 17:19:49 +0530 Vikas Gorur <vikas@xxxxxxxxxxxxx>
wrote:

> This is definitely something GlusterFS is designed to handle. I've
> set up this configuration in our lab and am looking into it. 

Excellent!

> Specifically, on dfsC you should have
> 
>   subvolumes gfs-dfsD-ds gfs-ds
> 
> and on dfsD you should have
> 
>   subvolumes gfs-ds gfs-dfsC-ds
> 
> Is this the case? If not, failover will not work.

Yes, this is the case.  For the sake of completeness, the server
configs are as follows:
dfsC : http://pastebin.ca/978276
dfsD : http://pastebin.ca/967749

And the client config :
dfsA : http://pastebin.ca/967754
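For readers who cannot reach the pastebins, here is a rough sketch of
what a dfsC server volfile with that subvolume ordering would look
like.  This is illustrative only, assuming the GlusterFS 1.3-era
volfile syntax; the directory path and hostname are placeholders, and
the real configs are at the links above.

```
# Hypothetical sketch of the dfsC server volfile (illustrative only;
# the actual configs are in the pastebin links above).

volume gfs-ds
  type storage/posix
  option directory /data/export        # local backend; path is a placeholder
end-volume

volume gfs-dfsD-ds
  type protocol/client
  option transport-type tcp/client
  option remote-host dfsD              # the peer node
  option remote-subvolume gfs-ds       # the peer's local posix volume
end-volume

volume gfs-ds-afr
  type cluster/afr
  # Per Vikas's note: on dfsC the AFR pairs the client volume pointing
  # at dfsD with the local posix volume, in this order.
  subvolumes gfs-dfsD-ds gfs-ds
end-volume
```

The dfsD volfile would mirror this, with a protocol/client volume
pointing back at dfsC and the order given in the quoted text
(subvolumes gfs-ds gfs-dfsC-ds).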

Thank you all for your emails and interest in my test case.  I would
really like to use GlusterFS, since (on paper) it seems to have
everything we want for our current and upcoming projects.  Hopefully we
can find a simple solution to the problem!


-- 
Daniel Maher <dma AT witbe.net>