Re: gfs mounted but not working

On Mon, Nov 06, 2006 at 01:38:40AM -0300, romero.cl@xxxxxxxxx wrote:
> Hi.
> 
> Now I'm trying this, and it works! For now...
> 
> Two nodes: node3 & node4
> node4 exports its /dev/sdb2 with gnbd_export as "node4_sdb2"
> node3 imports node4's /dev/sdb2 with gnbd_import (new device /dev/gnbd/node4_sdb2)
> 
> on node3:  gfs_mkfs -p lock_dlm -t node3:node3_gfs -j 4 /dev/gnbd/node4_sdb2
>                  mount -t gfs /dev/gnbd/node4_sdb2 /users/home
> 
> on node4: mount -t gfs /dev/sdb2 /users/home
> 
> and both nodes can read and write the same files on /users/home!!!
> 
> Now I'm going for this:
> 
> 4 nodes on a dedicated 3Com 1Gbit Ethernet switch:
> 
> node2 exporting with gnbd_export /dev/sdb2 as "node2_sdb2"
> node3 exporting with gnbd_export /dev/sdb2 as "node3_sdb2"
> node4 exporting with gnbd_export /dev/sdb2 as "node4_sdb2"
> 
> node1 (main) will import all "nodeX_sdb2" and create a logical volume named
> "main_lv" including:
> 
>     /dev/sdb2 (its own)
>     /dev/gnbd/node2_sdb2
>     /dev/gnbd/node3_sdb2
>     /dev/gnbd/node4_sdb2
> 
> Next I will try to export the new big logical volume with "gnbd_export" and
> then do gnbd_import on each node.
> With that, each node will see "main_lv", mount it on /users/home as GFS,
> and get one big shared filesystem to work on together.
> 
> Is this the correct way to do it? Or could it possibly deadlock?

Sorry. This will not work. There are a couple of problems.

1. A node should never gnbd-import a device that it has exported.  This can
cause memory deadlock. When memory pressure is high, nodes try to write
their buffers out to disk. Once a buffer is written to disk, the node can drop
it from memory, reducing memory pressure. When you do this over gnbd, for every
buffer that you write out on the client, a new buffer request comes into the gnbd
server. If you import a device you have exported (even indirectly, through the
logical volume on node1 in this setup), that new request just comes back to you.
This means you suddenly double your buffers in memory, just when memory was
running low.
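
To make the loop concrete, here is a rough, hypothetical sketch using the
names from your plan ("main_vg" is just a placeholder volume group name):

    node1# gnbd_import -i node2 ; gnbd_import -i node3 ; gnbd_import -i node4
    node1# gnbd_export -d /dev/main_vg/main_lv -e main_lv
    node4# gnbd_import -i node1                        # node4 now sees /dev/gnbd/main_lv
    node4# mount -t gfs /dev/gnbd/main_lv /users/home  # buffers node4 flushes here can route
                                                       # right back to node4's own gnbd export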

The solution is to access the local device only directly, never through
gnbd. Also note: if you are planning on accessing the local device
directly, you must not use the "-c" option when you export the device;
doing so will eventually lead to corruption. The "-c" option is only for
dedicated gnbd servers.
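
Concretely, that is roughly what you already did with node3 and node4, just
spelled out with the -c caveat (treat this as a sketch):

    node4# gnbd_export -d /dev/sdb2 -e node4_sdb2        # no -c, because node4 also uses the disk itself
    node4# mount -t gfs /dev/sdb2 /users/home            # local access goes to the raw device, never /dev/gnbd/...
    node3# gnbd_import -i node4
    node3# mount -t gfs /dev/gnbd/node4_sdb2 /users/home # remote access goes through gnbd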

2. Theoretically, you could have every node export its device to every
other node and then build a logical volume on top of all the devices on each
node, but you should not do this. It completely destroys the benefit of having
a cluster: since your GFS filesystem would then depend on having access to the
block devices of every machine, if ANY machine in your cluster went down, the
whole cluster would crash, because a piece of your filesystem would simply
disappear.


Without shared storage, your gnbd server will be a single point of failure.
The most common way people set up gnbd is with one dedicated gnbd server
machine that is used only to serve gnbd blocks, so that it is unlikely to
crash.
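
For reference, a minimal sketch of that kind of layout ("storage1",
"mycluster" and "shared_gfs" are made-up names; use your real cluster name
from cluster.conf):

    storage1# gnbd_export -d /dev/sdb2 -e shared_disk -c  # -c (cached) is fine: storage1 never mounts the disk itself
    node1# gnbd_import -i storage1                        # repeat the import on node2, node3 and node4
    node1# gfs_mkfs -p lock_dlm -t mycluster:shared_gfs -j 4 /dev/gnbd/shared_disk
    node1# mount -t gfs /dev/gnbd/shared_disk /users/home # gfs_mkfs once, then mount on every node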
 
> Sorry if my English isn't very good ;)
> 
> ----- Original Message ----- 
> From: "Kevin Anderson" <kanderso@xxxxxxxxxx>
> To: "linux clustering" <linux-cluster@xxxxxxxxxx>
> Sent: Sunday, November 05, 2006 10:12 PM
> Subject: Re:  gfs mounted but not working
> 
> 
> > On 11/5/06, romero.cl@xxxxxxxxx wrote:
> > >
> > >     Hi.
> > >
> > >     I'm trying your method, but still have a problem:
> > >
> > >     Note: /dev/sdb2 is a local partition on my second SCSI hard drive
> > >     (no RAID), running on an HP ProLiant.
> > >
> > GFS requires that all storage is equally accessible by all nodes in the
> > cluster.  Your other nodes have no path to the storage you set up so it
> > is impossible for them to share the data.
> >
> > Kevin
> >

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
