Re: Re: If I have 5 GNBD server?

On Mon, Aug 29, 2005 at 08:49:10AM +0700, Fajar A. Nugraha wrote:
> brianu wrote:
> 
> >Hello all,
> >
> > 
> >
> >This is a question I have basically been asking, the question on why 
> >you would want to do it is failover, the docs at 
> >http://sourceware.org/cluster/gnbd/gnbd_usage.txt state that 
> >dm-multipath is an option for gnbd,
> >
> I'm not sure about dm-multipath. The thing is, when a gnbd server dies,
> instead of saying "read/write failed" as a normal block device does, gnbd
> simply retries the request and tries to reconnect if it's disconnected.
> Forever.

If the gnbds are exported uncached (the default), the client will fail the IO
back if it can no longer talk to the server after a specified timeout.  However,
the userspace tools for dm-multipath are still too SCSI-centric to let you
run them on top of gnbd.  You can manually run dmsetup commands to build
the appropriate multipath map, scan the map to check whether a path has failed,
remove the failed gnbd from the map (so the device can close and gnbd can
start trying to reconnect), and then manually add the gnbd device back into
the map when it has reconnected.  That's pretty much all the dm-multipath
userspace tools do.  Someone could even write a fairly simple daemon that did
this, and become the personal hero of many people on this list.
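
Roughly, that manual sequence looks like this.  A sketch only: the
gnbd-mpath map name, the /dev/gnbd/* device names, and the
2097152-sector size are all invented, so substitute your own.

    # Build a multipath map with both imported gnbds as paths in one
    # round-robin path group.  Table format: start length multipath
    # #features #handlers #pathgroups first_group selector
    # #selector_args #paths #path_args device [per_path_arg] ...
    dmsetup create gnbd-mpath --table "0 2097152 multipath 0 0 1 1 \
        round-robin 0 2 1 /dev/gnbd/serverA 1000 /dev/gnbd/serverB 1000"

    # Scan the map; a failed path shows up as 'F' in the status output.
    dmsetup status gnbd-mpath

    # Remove the failed gnbd by loading a table without it, so device-
    # mapper drops its reference, the device can close, and gnbd can
    # start trying to reconnect.
    dmsetup suspend gnbd-mpath
    dmsetup reload gnbd-mpath --table "0 2097152 multipath 0 0 1 1 \
        round-robin 0 1 1 /dev/gnbd/serverB 1000"
    dmsetup resume gnbd-mpath

    # Once the gnbd has reconnected, load the two-path table again the
    # same way to put it back in the map.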

The only problem is that if you manually execute the commands, or write the
daemon in bash or some other scripting language, you can run into a memory
deadlock: in a very low memory situation, where the kernel needs to complete
gnbd IO requests to free up memory, the daemon can't allocate any memory
while doing its job.
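
Just to make the daemon's job concrete, here is a toy version of that loop
(same invented names as above, no error handling).  It's deliberately in
shell to show the shape of the job, not as a recommendation, for exactly
the reason above; the real thing wants to be a small C daemon with its
memory locked down.

    #!/bin/sh
    # Toy path monitor for the gnbd-mpath map built above.
    TWO_PATHS="0 2097152 multipath 0 0 1 1 round-robin 0 2 1 \
        /dev/gnbd/serverA 1000 /dev/gnbd/serverB 1000"
    ONE_PATH="0 2097152 multipath 0 0 1 1 round-robin 0 1 1 \
        /dev/gnbd/serverB 1000"

    degraded=0
    while sleep 5; do
        if [ "$degraded" = 0 ] && dmsetup status gnbd-mpath | grep -q ' F '; then
            # A path failed: drop it from the map so the gnbd device
            # can close and begin reconnecting.
            dmsetup suspend gnbd-mpath
            dmsetup reload gnbd-mpath --table "$ONE_PATH"
            dmsetup resume gnbd-mpath
            degraded=1
        elif [ "$degraded" = 1 ] &&
             dd if=/dev/gnbd/serverA of=/dev/null bs=512 count=1 2>/dev/null
        then
            # The dropped path answers reads again: put it back.
            dmsetup suspend gnbd-mpath
            dmsetup reload gnbd-mpath --table "$TWO_PATHS"
            dmsetup resume gnbd-mpath
            degraded=0
        fi
    done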

> >and documents elsewhere also indicate that GNBD can be configured as a 
> >redundancy, yet I cannot find any documentation on how to configure it.
> >
> > 
> >
> >If using LVM to make a volume of imported gnbds is not the answer for 
> >redundancy can anyone suggest a method that is? Im not opposed to 
> >using any other resource of cluster or GFS but I would really like to 
> >implement a redundant solution, ( gnbd, gulm, etc.).
> >
> > 
> >
> It would be possible if you have at least two servers, connected to the
> same storage, running as gnbd servers and exporting the same block devices.
>
> You need to have one IP address that can fail over to any available node
> (use rgmanager or keepalived to achieve this). That way, if one server
> node dies, the IP address will be moved to the other node. The client will
> be disconnected, but since gnbd_import will automatically reconnect (it
> actually connects to a different node, since the gnbd server IP address
> was moved) the process will be transparent to the client (all they see
> is a slight delay during reconnect).
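
(For reference, the floating IP Fajar describes can be done with a
keepalived VRRP config along these lines; the interface name and the
10.0.0.100 service address are made up, and the standby server gets the
same block with state BACKUP and a lower priority.  Clients then
gnbd_import against 10.0.0.100.)

    # /etc/keepalived/keepalived.conf on the primary gnbd server
    vrrp_instance GNBD_VIP {
        state MASTER            # "state BACKUP" on the standby
        interface eth0
        virtual_router_id 51
        priority 100            # e.g. 50 on the standby
        advert_int 1
        virtual_ipaddress {
            10.0.0.100          # the address clients import from
        }
    }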

If you have the gnbd exported in caching mode, each server maintains its
own cache.  So if you write a block to one server and that server crashes,
then when you read the block from the second server you will get stale data
if it was already cached there before the read; that won't work.  If you
set the gnbd to uncached mode, the client will fail the IO back, and something
(a multipath driver) needs to be there to reissue the request.
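
In other words, for this scheme both servers have to export uncached,
which (per gnbd_usage.txt) is what you get when you leave off -c.  The
export name and device below are just examples, and I'm assuming -c is
the flag that turns caching on:

    # On each gnbd server: export the shared device uncached (the
    # default), so a dead server fails IO back instead of a survivor
    # answering from a stale cache.
    gnbd_export -e shared_disk -d /dev/sdb1

    # What NOT to do for this setup: server-side caching is exactly
    # what lets the two servers' views of a block diverge.
    # gnbd_export -e shared_disk -d /dev/sdb1 -c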

-Ben
 
> Regards,
> 
> Fajar
> 

--

Linux-cluster@xxxxxxxxxx
http://www.redhat.com/mailman/listinfo/linux-cluster
