Yazan,

You can implement RLM (Redundant Lock Manager) on RHEL3 with 3 servers. The lock servers do NOT all need access to the shared storage, although they can have it; the file locking is all handled over the heartbeat LAN. I believe that at least 1 of the lockservers needs access to the shared storage, but I have not tried any fewer than X - 1, where X is the total number of lock servers.

The one important rule of thumb to keep in mind is that when a lockserver has access to the shared storage, it must be fenced from BOTH the disk and the network. If the lockserver only has access to the network, it should be fenced from the network only. In both cases, a network power switch is the preferred fencing method. If a non-lockserver GFS server (i.e., a node) is accessing the shared storage, it can be fenced from the storage via a fiber switch.

The method for implementing a non-disk-attached RLM lockserver is to make a copy of the current CCS information on a disk-attached lockserver, copy it to the non-disk-attached lockserver, and then make a CCS file archive on the non-disk-attached lockserver with ccs_tool. Incidentally, it is far easier to set this up if you make one of the disk-attached lockservers the master lockserver.

Once you have copied the cluster config to a directory on the non-disk-attached lockserver, edit the CCS_ARCHIVE parameter in /etc/sysconfig/ccsd so that it points to the local directory that contains the config files. Then all you need to do to generate the archive is run ccs_tool -v create /etc/gfs/data0 /etc/gfs/data0.cca (of course, your pathnames will vary). Next, start ccsd on the non-disk lockserver with ccsd -f /path/to/file.cca and then start lock_gulmd. You can verify the new lockserver in /var/log/messages on the master lock server. Then test fencing and you are done!
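Put together, the sequence on the new lockserver looks roughly like this. The /etc/gfs/data0 path is just the example from above, the hostname lockserver3 is made up, and scp is only one way to copy the files, so substitute your own names and method:

  # on a disk-attached lockserver: copy the CCS config files over
  scp -r /etc/gfs/data0 lockserver3:/etc/gfs/

  # on the non-disk-attached lockserver (lockserver3):
  # point ccsd at the local config by editing /etc/sysconfig/ccsd, e.g.
  #   CCS_ARCHIVE="/etc/gfs/data0"

  # build the local CCS archive from the copied config files
  ccs_tool -v create /etc/gfs/data0 /etc/gfs/data0.cca

  # start ccsd against the local archive, then the lock daemon
  ccsd -f /etc/gfs/data0.cca
  lock_gulmd

  # watch /var/log/messages on the master lockserver to see the new
  # lockserver log in, then test fencing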
I hope this helps.

Cheers,
jacob

> -----Original Message-----
> From: linux-cluster-bounces@xxxxxxxxxx
> [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Yazan Al-Sheyyab
> Sent: Wednesday, July 06, 2005 1:38 AM
> To: linux clustering
> Subject: Re: GFS installation
>
> hi haydar,
>
> my friend, i was having the same case: two ML 370 HP ProLiant nodes
> and a shared storage MSA500, with the two nodes connected to the
> shared storage by SCSI cable. the problem was with the lock server
> when implementing GFS.
>
> i'm using RHEL_ES_V3_U4 and the GFS for V3 U4 also, but in this GFS
> release you have to have an odd number of lock servers. i mean that
> when you have two servers you have to have three lock servers, but in
> my case i have used only the first node as a lock server, so i ended
> up with poor redundancy in the cluster because i am still studying
> the purchase of a third server. i heard that there is a release of
> GFS, GFS 6.1 working with RHEL_V4, which will work without the need
> for the locking service ..... somebody correct me if that is not
> right.
>
> but somebody told me that i can use the third lock server as a
> logical, not physical, server; he meant that i don't need more
> hardware like another server.
>
> but my question now is: can anybody on our list explain to me how to
> use the third server in a logical way? is it, as in unix and hp-ux,
> an area of the disk used for locking, or what?
>
> can we manage one of the two cpus on the server as virtual to get a
> solution without another server?
>
> Sorry for the long Email, but we have this last problem.
>
> Regards
> -------------------------------------------------
> Yazan
> ---------------------------
>
> ----- Original Message -----
> From: "haydar Ali" <haydar2906@xxxxxxxxxxx>
> To: <linux-cluster@xxxxxxxxxx>
> Sent: Tuesday, July 05, 2005 6:22 PM
> Subject: Re: GFS installation
>
> > Hi Igor,
> >
> > Thanks for this URL.
> > My question is: Do I have to use 3 nodes to achieve a GFS solution?
> > We have 2 servers HP Proliant 380 G3 (RedHat Advanced Server 2.1),
> > each attached by 2 fiber channels to the storage area network SAN
> > HP MSA1000, and we want to install and configure GFS to allow the
> > 2 servers to simultaneously read and write to a single shared file
> > system (Word documents located in /u04) on the SAN HP MSA1000.
> > I read the example that you sent to me and I see 3 nodes: 2 client
> > nodes share a directory mounted on the 3rd server node, but in our
> > solution the directory is located on the SAN.
> >
> > Do you have any explanation or ideas for our request?
> > Thanks
> >
> > Haydar
> >
> >
>> From: Igor <logastellus@xxxxxxxxx>
>> Reply-To: linux clustering <linux-cluster@xxxxxxxxxx>
>> To: linux clustering <linux-cluster@xxxxxxxxxx>
>> Subject: Re: GFS installation
>> Date: Thu, 16 Jun 2005 08:33:16 -0700 (PDT)
>>
>> Look at this URL that David suggested to me:
>>
>> http://sources.redhat.com/cgi-bin/cvsweb.cgi/cluster/doc/min-gfs.txt?rev=1.3&content-type=text/x-cvsweb-markup&cvsroot=cluster
>>
>> it's pretty good.
>>
>> --- haydar Ali <haydar2906@xxxxxxxxxxx> wrote:
>>
>> > Hi,
>> >
>> > I'm looking for an installing and configuring procedure for GFS
>> > (examples).
>> > We have 2 servers HP Proliant 380 G3 (RedHat Advanced Server 2.1)
>> > attached by fiber optic to the storage area network SAN HP MSA1000
>> > and we want to install and configure GFS to allow 2 servers to
>> > simultaneously read and write to a single shared file system (Word
>> > documents located into /u04) on the SAN.
>> >
>> > Thanks.
>> >
>> > Haydar

--
Linux-cluster@xxxxxxxxxx
http://www.redhat.com/mailman/listinfo/linux-cluster