Re: question about rebooting master server

On Sat, 2006-05-13 at 15:07 -0400, Jason wrote:
> So I have both servers, tf1 and tf2, connected to shared storage (a Dell 220S) with 6.0.2.
> They both seem to access the shared storage fine, but if I reboot the node that's the master,
> the slave cannot access the shared storage until the master comes back up.
> Here's the info from the logs.

> 
> is this normal? I would assume that when the master was rebooted, the other node should still be able to 
> access the storage with no problems.
> 
Yes, it is normal.  The GULM lock manager requires a minimum of 3 nodes
in order to determine who is master.  With only two nodes running, if
you lose one, the remaining node has no way to determine that it is not
in a split-brain situation, so the lock manager waits until quorum is
restored.  For a two-node cluster, you need to be running GFS 6.1 with
DLM as the lock manager on a 2.6 kernel.
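
In case it helps, here is a rough sketch of what a two-node CMAN/DLM
/etc/cluster/cluster.conf can look like.  Only the node names tf1/tf2
come from your mail; the cluster name and the fence sections are
placeholders, so treat this as illustrative rather than a drop-in
config.  The two_node="1" / expected_votes="1" pair is what lets a
single surviving node keep quorum:

  <?xml version="1.0"?>
  <cluster name="tfcluster" config_version="1">
    <!-- special-case two-node mode: one surviving node keeps quorum -->
    <cman two_node="1" expected_votes="1"/>
    <clusternodes>
      <clusternode name="tf1" votes="1">
        <fence/>  <!-- configure a real fence method here -->
      </clusternode>
      <clusternode name="tf2" votes="1">
        <fence/>  <!-- configure a real fence method here -->
      </clusternode>
    </clusternodes>
    <fencedevices/>
  </cluster>

With something like that in place, losing one node should leave the
other with quorum, but you still want working fencing configured
before relying on it.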

Kevin

--

Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
