Re: [Linux-cluster] GNBD & Network Outage

On Fri, Feb 04, 2005 at 10:26:36AM +0000, Nigel Jewell wrote:
> Dear Ben,
> 
> Thank you for your detailed reply.  It is always refreshing to get a 
> decent response on a mailing list ;) .
>

<snip>
 
> My understanding was that the "-c" put the device in cached mode, as 
> described here:

You are correct.
 
> http://www.redhat.com/docs/manuals/csgfs/admin-guide/s1-gnbd-commands.html
> 
> Or are you saying that by not putting the "-c" put its in uncached mode?

Yes, that's what I meant to say.  So much for decent responses :P

> The intention of the setup was to have two hosts both exporting an 
> unmounted device, and the alternative device using it as a RAID-1 
> device.  Then to use heartbeat to mount and unmount the partitions as 
> required.  For example:
> 
> HOST A:
> 
> /dev/hda1 (md0, ext3, mounted)
> /dev/hda2 (ext3, unmounted, gnbd_exported as A)
> /dev/gnbd/B (md0, ext3, mounted)
> 
> HOST B:
> 
> /dev/hda1 (ext3, unmounted, gnbd_exported as B)
> /dev/hda2 (md0, ext3, mounted)
> /dev/gnbd/A (md0, ext3, mounted)
> 
> I hope that makes sense.

O.k. let me see if I get this.

hostA is setting up a mirror on /dev/hda1 and /dev/gnbd/B
hostB is setting up a mirror on /dev/hda2 and /dev/gnbd/A

So, if hostA goes down, you will be able to access its data on /dev/hda1
of hostB. If hostB goes down, you will be able to access its data on /dev/hda2
of hostA.
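
For reference, a minimal sketch of how each mirror might be assembled with
mdadm (device names are taken from the layout above; mount points and exact
flags are illustrative, check them against your mdadm version):

```shell
# On hostA: mirror the local partition with the device imported from hostB.
# Assumes /dev/gnbd/B has already been imported via gnbd_import.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/gnbd/B
mkfs.ext3 /dev/md0
mount /dev/md0 /mnt/data-a

# On hostB, symmetrically:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda2 /dev/gnbd/A
```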

> If so, does what we are trying to achieve sound sensible?

Yeah, you can do that.

>Any gotchas/advice?

The obvious issue is, for example, if hostB goes down and you are accessing
its data through /dev/hda2 on hostA, when hostB comes back up, you must
unmount /dev/hda2 on hostA before you export it. Also, even if hostA never does
any writing to /dev/hda2, the data on it may not be in sync with the data
on /dev/hda2 of hostB, so you will need to resync them when hostB comes back.
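
One possible recovery sequence, sketched under the assumption that hostA's
copy of the data is the one to resync from (check that against your own
failover policy before trusting it):

```shell
# On hostA, once hostB is back:
umount /dev/hda2                 # stop serving hostB's data locally
gnbd_export -d /dev/hda2 -e A    # re-export it so hostB can import it again

# On hostB, rebuild the mirror from the up-to-date half:
gnbd_import -i hostA                          # re-import hostA's exports
mdadm --assemble --run /dev/md0 /dev/gnbd/A   # start degraded on the current copy
mdadm /dev/md0 --add /dev/hda2                # re-add the stale local disk; md resyncs it
```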

For this, there isn't any real reason to use gnbd over nbd... nbd will
fail out right away, which is annoying when it's caused by some transient
network issue, but you don't need to have a cluster manager set up to
use it.
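
For comparison, the equivalent plumbing with plain nbd would look something
like this (the port number and nbd device name are just examples):

```shell
# On hostA, export /dev/hda2 over TCP port 2000:
nbd-server 2000 /dev/hda2

# On hostB, import it and use it as the second half of the mirror:
nbd-client hostA 2000 /dev/nbd0
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda2 /dev/nbd0
```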

Another option which might work is to use gnbd in cached mode, but when you
decide that the other node really isn't there, run
# gnbd_import -rO <device>
This will flush the requests from the device. The /dev/gnbd/<device> file will
also be removed, which may piss off your mirror. However, if this works, you
get the benefit of retrying the connection until heartbeat decides the other
node is really dead.
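
So on hostA the failover step might look roughly like this (using the export
name B from the layout above):

```shell
# On hostA, once heartbeat decides hostB is really dead:
gnbd_import -rO B    # flush queued requests; the /dev/gnbd/B node is removed

# If md copes with the device node disappearing, the mirror on hostA
# should keep running degraded on /dev/hda1.
```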

Hope this helps.

-Ben
> 
> -- 
> Nige.
> 
> PixExcel Limited
> URL: http://www.pixexcel.co.uk
> 
> --
> 
> Linux-cluster@xxxxxxxxxx
> http://www.redhat.com/mailman/listinfo/linux-cluster

