I have increased the number of journals on each FS from the 8 previously
discussed to 16. At present the maximum number of servers I intend to use
is 10, leaving 6 journals free for expansion.

When trying to mount the 6th server, I end up with the same problem: the
mount hangs until I remove a mount from another system, at which point the
6th system is able to complete the GFS mount.

Even after enabling full verbosity on the gulm lock server I still can't
see anything of interest, and dmesg offers no clue either. The only message
I'm seeing is the following: "GFS Kernel Interface" is logged out. fd:10" etc.

After searching the net, the only advice I can find is to increase the
number of journals to at least the number of nodes. This I have done, with
no success. Any other ideas?

--

Regards

Richard Mayhew
Unix Specialist

-----Original Message-----
From: Richard Mayhew [mailto:rmayhew@xxxxxxxx]
Sent: 18 August 2004 04:40 PM
To: Derek Anderson
Cc: linux-cluster@xxxxxxxxxx
Subject: RE: [Linux-cluster] GFS Node Limit?

I have 4 mounts of 50GB each. Each mount has 8 journals (a number I found
in the manual somewhere). Is this a problem? Do you have a recommended FS
layout?

--

Regards

Richard Mayhew
Unix Specialist

-----Original Message-----
From: Derek Anderson [mailto:danderso@xxxxxxxxxx]
Sent: 18 August 2004 04:31 PM
To: Discussion of clustering software components including GFS; Richard Mayhew
Subject: Re: [Linux-cluster] GFS Node Limit?

How many journals on your filesystem?

On Wednesday 18 August 2004 09:17, Richard Mayhew wrote:
> Hi,
>
> I have 4 gulm_lock servers set up and 6 gulm_lock clients.
>
> I can mount the GFS file systems on all the lock servers and only 1
> client. When I try to add another client (which is specified in the
> nodes list) it logs in to the master gulm lock server with no problems.
> When I try to mount the gfs file systems it hangs, until I unmount the
> file system from another client.
> It's as if there is a max of either 1
> client or a total of 5 servers/clients that can mount the GFS
> FSs... Doesn't make sense.
>
> Any ideas?
>
> --
>
> Regards
>
> Richard Mayhew
> Unix Specialist

--

Linux-cluster@xxxxxxxxxx
http://www.redhat.com/mailman/listinfo/linux-cluster
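[For anyone hitting the same wall: the journal-count advice above can be
checked and acted on with the GFS1 userspace tools. A sketch follows; the
device and mount-point names are placeholders, and the exact flags should
be verified against the man pages for your GFS version.]

```shell
# Assumptions: GFS1 userspace tools (gfs_tool, gfs_mkfs, gfs_jadd) from the
# Red Hat GFS package; /dev/pool/gfs1 and /mnt/gfs1 are hypothetical names.

# Report details of a mounted filesystem, including its journal count:
gfs_tool df /mnt/gfs1

# At mkfs time, create at least one journal per node that will ever mount
# the filesystem (lock table is clustername:fsname for lock_gulm):
gfs_mkfs -p lock_gulm -t mycluster:gfs1 -j 16 /dev/pool/gfs1

# Journals can also be added to a mounted filesystem later, provided the
# volume has free space beyond the end of the filesystem:
gfs_jadd -j 8 /mnt/gfs1
```

Note that mount attempts beyond the number of journals will fail or hang,
so the journal count must be confirmed per filesystem, not assumed from the
mkfs command that was intended.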