All,

I have a question regarding the latest implementation of GFS 6.0 on Red Hat Enterprise Linux 3. Here is our situation: we have a SAN project coming up, but we do not yet have the SAN or any similar shared-storage device available. We do have the node machines on hand, and we are trying to work through a GFS implementation now, since we are new to GFS (we have run Red Hat Linux since version 5.0 and all the flavors in between).

To simulate the SAN environment we are using the GNBD software, following the instructions at:

http://www.redhat.com/docs/manuals/csgfs/admin-guide/s1-ex-slm-ext-gnbd.html

This is the LOCK_GULM, SLM External, GNBD example of GFS. We had no problems creating the shared devices, pools, and filesystems as described in the documentation.

The problem is this: when we mount the GFS filesystem on one node and then try to mount it on the second node, the second node hangs on the mount command. No errors are reported on the console or in the logs, and no errors are reported by the lock server either. Everything appears to be working correctly; the log messages on both machines at that instant are identical, i.e., the node that is hung shows the same messages as the node that is not. If I go to node one, with node two still trying to mount the filesystem, and unmount it there, node two immediately finishes its mount command and works fine. However, if I then try to mount the filesystem on node one again, it hangs in the same way, and so on. In short, only one node at a time is able to mount the filesystem.

The configuration files are all the generic examples given in the documentation, with GNBD as the fencing mechanism (we also tried manual fencing, and the same behavior occurs with that method). I can provide all of the configuration files and log output if this is not something that experienced GFS users will recognize right away.

Thank you all for your time.

Robert
Fidelity Communications
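
P.S. In case it helps, here is the rough shape of the sequence we followed from that document. This is paraphrased from memory, and the cluster name, pool names, device paths, and mount points below are placeholders rather than our actual values.

On the GNBD server (the machine standing in for the SAN):

    gnbd_export -d /dev/sda1 -e gfs01    # export a local disk as a GNBD

On each GFS node:

    gnbd_import -i gnbdserver            # import the GNBDs from the server
    pool_tool -c pool_gfs01.cfg          # create the pool on the imported device
    pool_assemble -a                     # activate the pools

Filesystem creation (run once) and mounting:

    gfs_mkfs -p lock_gulm -t alpha:gfs01 -j 2 /dev/pool/pool_gfs01
    mount -t gfs /dev/pool/pool_gfs01 /gfs01    # on node one
    mount -t gfs /dev/pool/pool_gfs01 /gfs01    # on node two -- this is the mount that hangs

The CCS archive was created with ccs_tool, ccsd is running on both nodes, and lock_gulmd is running on the external lock server, all per the example.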