[Linux-cluster] lock_gulmd fails to start because of a redundant lock server configuration

Dear Sir

I am building a GFS cluster with three nodes (gfslocksv, gfsnodea, gfsnodeb).

gfslocksv runs kernel 2.4.21-20EL with GFS-6.0.0-15 installed.
gfsnodea and gfsnodeb run kernel 2.4.21-20ELsmp, also with GFS-6.0.0-15
installed.

gfslocksv is the master lock server.
gfsnodea and gfsnodeb are the slave lock servers, and they also mount the filesystem.
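
For reference, in a GFS 6.0 / GULM setup these three nodes would be declared as lock
servers in cluster.ccs, roughly as in the sketch below (the cluster name "gfscluster"
is only a placeholder for illustration, not our real name):

    cluster {
        name = "gfscluster"        # placeholder cluster name, for illustration only
        lock_gulm {
            # all GULM lock servers; the master is elected among these nodes
            servers = ["gfslocksv", "gfsnodea", "gfsnodeb"]
        }
    }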

However, even when gfsnodea and gfsnodeb are booted after gfslocksv has booted,
the filesystem does not mount.
The filesystem can then be mounted by restarting lock_gulmd.
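
By "restarting lock_gulmd" I mean roughly the commands below; the gulm_tool nodelist
line is included only as a sketch of how the master lock server's view of the nodes
can be checked, not something described above:

    # on the node that fails to mount
    service lock_gulmd restart

    # ask the master lock server (gfslocksv) which nodes it currently knows about
    gulm_tool nodelist gfslocksv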

However, when gfsnodea and gfsnodeb are rebooted again, the filesystem still does
not mount.
In addition, a fence operation is executed.

What should I do?

Regards

------------------------------------------------------
Shirai Noriyuki
Chief Engineer Technical Div. System Create Inc
Ishkawa 2nd Bldg 1-10-8 Kajicho
Chiyodaku Tokyo 101-0044 Japan
Tel81-3-5296-3775 Fax81-3-5296-3777
e-mail:shirai@xxxxxxxxxx web:http://www.sc-i.co.jp
------------------------------------------------------




