Re: [Linux-cluster] Problem with RHEL3 and GFS-6.0.0.10, Kernel Panic


On Thu, Sep 16, 2004 at 11:05:02AM +0100, Paulo Sousa wrote:
>    I'm testing GFS on RHEL3 but I have some problems.
> 
>    I have 2 servers connected to shared SCSI storage, and one of the
>    servers is the lock server (I don't have redundancy for the lock
>    server at the moment; it is just for testing).
> 
>    Server1 (mounts gfs filesystem + lock_server)
>    Server2 (mounts gfs filesystem)
> 
>    This is the test I made on server 1:
> 
>    /etc/init.d/lock_gulmd stop

You have a single lock server.  This is where all of the lock state is
stored.  The lock state is what keeps the different nodes mounting gfs
from corrupting data.  You have no redundancy in the lock state.  You
stopped the lock server.  The lock state was lost.  The cluster cannot
continue.  The nodes killed themselves rather than let the filesystem
metadata get corrupted.

If you want to be able to stop lock servers, you MUST have redundancy in
the lock servers.  For gulm this means you need three nodes.
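As a rough sketch, redundant gulm lock servers in GFS 6.0 are declared in the CCS cluster configuration file (cluster.ccs); the cluster name and node hostnames below are placeholders, and the exact syntax should be checked against the GFS 6.0 documentation for your release:

```
# cluster.ccs -- hypothetical example with three gulm lock servers
cluster {
    name = "testcluster"
    lock_gulm {
        # An odd number (usually three) of servers lets the
        # remaining two keep quorum if one is stopped.
        servers = ["server1", "server2", "server3"]
    }
}
```

With three servers, stopping lock_gulmd on one node no longer loses the lock state, because the other two still hold quorum.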


-- 
Michael Conrad Tadpol Tilstra
Gravity is a myth, the Earth sucks.


