On Tue, 2005-04-19 at 20:47 +0200, birger wrote:
> Regarding lockd, I think my solution is valid given these two
> constraints:
> - The cluster nodes should not be NFS clients (and thanks to GFS I
>   don't need that).
> - There should only be one NFS service running on any cluster node.
>   And I only have one NFS service.

Ah, ok, this might work then. I've never tested anything quite like it.

One thing to note: when you take an NFS lock on a GFS file system, the
lock will exist on the other cluster node too, because NFS can export
the same GFS filesystem from multiple nodes (see the exports sketch at
the end of this message). I'm not sure what would happen during the
lock-reclaim grace period if you tried to relocate a service while a
client still held locks, since the locks would exist on both nodes...

Ken, any idea on this?

> When I set the name for statd to the name of the service IP address
> and relocate the status dir to a cluster disk, a takeover should
> behave just like a server reboot, shouldn't it?

In principle, that's all a failover/relocation should ever look like to
clients. (There's a statd sketch at the end as well.)

> My cluster only has one node (even if I have defined 2 nodes). I have
> to get the first node production ready and migrate everything over
> first. Then make the old file server a second cluster node.
>
> I'll have a look around and see if I can find a solution.

Ok, it could be a bug in the status code when looking for NIS exports.
I'll try to take a look at it this week.

-- Lon
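
For reference, a minimal /etc/exports sketch for exporting the same GFS
filesystem from both nodes. The path, client pattern, and fsid value are
made up for illustration; the one concrete point is that fsid= should be
pinned to the same value on every node, so clients hold identical NFS
file handles no matter which node is serving:

    # /etc/exports (identical copy on node1 and node2; the path and
    # client pattern here are hypothetical). fsid=1234 forces both
    # nodes to generate the same NFS file handles, so handles held by
    # clients remain valid across a service relocation.
    /mnt/gfs/export  *.example.com(rw,sync,fsid=1234)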
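
And a sketch of the statd arrangement birger describes, using the -n and
-P options of rpc.statd (the service hostname and state path below are
examples, not from any real config):

    # Run statd under the floating service name, with its state
    # directory (the sm/ and sm.bak/ client lists) on the shared
    # cluster disk. After a takeover, the surviving node finds the
    # same client list there and sends SM_NOTIFY under the same name,
    # so to clients it looks exactly like a server reboot.
    rpc.statd -n nfs-service.example.com -P /mnt/gfs/statd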