On Fri, 2007-12-07 at 07:13 +0100, Ante Karamatić wrote:
> On Thu, 06 Dec 2007 17:34:21 -0500
> Lon Hohberger <lhh@xxxxxxxxxx> wrote:
>
> > fabioc (!= fabbione) and I did this today:
> >
> > http://gfs.wikidev.net/DRBD_Cookbook
> >
> > I only did very basic testing, but it's a start.
>
> I'm using exactly the same setup, except fencing through DRBD, on
> Ubuntu, and can only confirm that this works as expected.

Could you expand on this -- do you have the script that drbd is calling
for fencing?

I assume you're using 'resource', right?  "Ssh to the other guy and
issue some command..."

-- Lon

We are using a similar setup with a three-node cluster. The third node
mounts the gfs volumes through a managed NFS service. All three cluster
nodes act as servers for diskless nodes (XDMCP through LVS).

We have observed a few issues, though.

1) On the drbd nodes, we have the root partition on a logical volume.
Our drbd+gfs disks are also clustered LVs, so we had to manually
restart clvmd after drbd in order for the gfs volumes to become active.

2) The managed NFS service refuses to fail over. I am not sure whether
this is because of manual fencing. Our APC MasterSwitch is expected
shortly, so we will know more about NFS failover once we have a proper
fencing setup. I would be very interested in trying this fencing
through DRBD.

3) The disk IO is very slow -- almost a bottleneck.
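For what it's worth, the restart ordering in point 1 can be captured in a small bring-up wrapper. This is only a sketch: the Red Hat-style `service` commands and the volume group name `clustervg` are assumptions for illustration, not taken from the thread.

```shell
#!/bin/sh
# Workaround sketch for point 1: when the clustered LVs sit on top of
# DRBD, clvmd has to be (re)started after drbd so it can see the PVs.
# Service and VG names here are assumptions, not from the thread.

VG=${VG:-clustervg}

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"      # print instead of executing
    else
        "$@"
    fi
}

bring_up_storage() {
    run service drbd start        # replicated block device first
    run service clvmd restart     # clvmd rescans and now sees the DRBD PV
    run vgchange -ay "$VG"        # activate clustered LVs so gfs can mount
}

# Safe demonstration; drop DRY_RUN=1 to actually run the services.
DRY_RUN=1
bring_up_storage
```

The same ordering could instead be baked into the init sequence (drbd before clvmd), which avoids the double start.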
I wonder if getting rid of the LVs and making gfs directly on the drbd
device might help?

Another question -- maybe OT, sorry if so. Is there a way to fail over
the diskless nodes to another cluster server in case one of the cluster
servers goes down?

With warm regards,
Koustubha Kale

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
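Coming back to Lon's question about the fencing script: a minimal sketch of the "ssh to the other guy" approach, written as a DRBD outdate-peer handler. The peer hostname, resource name, script path, and exit-code meanings below are assumptions based on DRBD 8-era handler conventions, not details from the thread -- check your DRBD version's documentation before relying on them.

```shell
#!/bin/sh
# Sketch of an "ssh to the other node" outdate-peer fencing handler.
# All names (node2, r0, the script path in the comment) are assumptions.
#
# Hooked into drbd.conf roughly like this (DRBD 8.0-era syntax):
#   disk     { fencing resource-only; }
#   handlers { outdate-peer "/usr/local/sbin/outdate-peer.sh"; }

PEER=${PEER:-node2}               # hostname of the other cluster node
RES=${DRBD_RESOURCE:-r0}          # DRBD exports DRBD_RESOURCE to handlers

outdate_peer() {
    cmd="drbdadm outdate $RES"
    if [ "$DRY_RUN" = "1" ]; then
        # Print instead of executing, so the sketch can be exercised safely.
        echo "would run: ssh root@$PEER $cmd"
        return 0
    fi
    # Mark the peer's copy Outdated so it cannot be promoted with stale
    # data.  Return codes follow assumed DRBD handler conventions
    # (4 = peer outdated, 5 = peer unreachable) -- verify for your version.
    if ssh -o ConnectTimeout=5 "root@$PEER" "$cmd"; then
        return 4
    fi
    return 5
}

# Safe demonstration; remove the DRY_RUN line for a real handler.
DRY_RUN=1
outdate_peer
```

Note that ssh-based fencing shares the usual caveat: if the peer is down or the network is partitioned, the ssh fails and the handler can only report the peer unreachable, which is why a power switch (like the APC MasterSwitch mentioned above) is the more robust option.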