Re: A few more newbie questions about gulm

On Fri, 24 Jun 2005, Stanislav Sedov wrote:
> You'd rather not use cman with gulm. Gulm works better without it, since it can handle cluster membership, fencing, etc. on its own.

How do I configure fencing with it? Do I still use cluster.conf?
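
Just so it's clear what I'm picturing: a rough cluster.conf sketch for a gulm cluster with fencing, assuming the RHEL4-era schema still applies here. The xen2/xen3 hostnames and the APC fence device (address, login, ports) are made-up placeholders:

<?xml version="1.0"?>
<cluster name="xencluster" config_version="1">
  <gulm>
    <!-- odd number of lock servers -->
    <lockserver name="xen1.int.technicality.org"/>
    <lockserver name="xen2.int.technicality.org"/>
    <lockserver name="xen3.int.technicality.org"/>
  </gulm>
  <clusternodes>
    <clusternode name="xen1.int.technicality.org">
      <fence>
        <method name="single">
          <device name="apc" port="1"/>
        </method>
      </fence>
    </clusternode>
    <!-- repeat for the other nodes -->
  </clusternodes>
  <fencedevices>
    <fencedevice name="apc" agent="fence_apc" ipaddr="10.20.0.250" login="apc" passwd="apc"/>
  </fencedevices>
</cluster>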

> Gulm uses an odd number of master servers, plus a local lock server on each node. So you must run lock_gulmd -s master_server1,master_server2,master_server3,... -n clustername --name nodename. Cman is not needed.
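
Spelled out for a three-master setup, I take it that template would look something like the line below (the xen2/xen3 hostnames are placeholders I invented):

lock_gulmd -s xen1.int.technicality.org,xen2.int.technicality.org,xen3.int.technicality.org \
    -n xencluster --name xen1.int.technicality.org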

I gave it a shot with just the one server to start:

lock_gulmd -s xen1.int.technicality.org -n xencluster --name xen1.int.technicality.org

Here are the errors I'm getting:

Jun 24 14:32:27 xen1 lock_gulmd_main[2918]: Forked lock_gulmd_core.
Jun 24 14:32:27 xen1 lock_gulmd_core[2920]: Starting lock_gulmd_core DEVEL.1119559445. (built Jun 23 2005 15:45:06) Copyright (C) 2004 Red Hat, Inc.  All rights reserved.
Jun 24 14:32:27 xen1 lock_gulmd_core[2920]: I am running in Standard mode.
Jun 24 14:32:27 xen1 lock_gulmd_core[2920]: I am (xen1.int.technicality.org) with ip (::ffff:10.20.0.201)
Jun 24 14:32:27 xen1 lock_gulmd_core[2920]: This is cluster xencluster
Jun 24 14:32:27 xen1 lock_gulmd_core[2920]: I see no Masters, So I am becoming the Master.
Jun 24 14:32:27 xen1 lock_gulmd_core[2920]: Could not send quorum update to slave xen1.int.technicality.org
Jun 24 14:32:27 xen1 lock_gulmd_core[2920]: New generation of server state. (1119641547702631)
Jun 24 14:32:28 xen1 lock_gulmd_main[2918]: Forked lock_gulmd_LT.
Jun 24 14:32:28 xen1 lock_gulmd_LT[2922]: Starting lock_gulmd_LT DEVEL.1119559445. (built Jun 23 2005 15:45:06) Copyright (C) 2004 Red Hat, Inc.  All rights reserved.
Jun 24 14:32:28 xen1 lock_gulmd_LT[2922]: I am running in Standard mode.
Jun 24 14:32:28 xen1 lock_gulmd_LT[2922]: I am (xen1.int.technicality.org) with ip (::ffff:10.20.0.201)
Jun 24 14:32:28 xen1 lock_gulmd_LT[2922]: This is cluster xencluster
Jun 24 14:32:28 xen1 lock_gulmd_core[2920]: ERROR [src/core_io.c:1243] Failed to recv all of the service login packet. -1:Unknown error 4294967295
Jun 24 14:32:28 xen1 lock_gulmd_LT000[2922]: ERROR [src/lock_io.c:576] Failed to receive login reply. -104:104:Connection reset by peer
Jun 24 14:32:29 xen1 lock_gulmd_main[2918]: Forked lock_gulmd_LTPX.
Jun 24 14:32:29 xen1 lock_gulmd_LTPX[2924]: Starting lock_gulmd_LTPX DEVEL.1119559445. (built Jun 23 2005 15:45:06) Copyright (C) 2004 Red Hat, Inc.  All rights reserved.
Jun 24 14:32:29 xen1 lock_gulmd_LTPX[2924]: I am running in Standard mode.
Jun 24 14:32:29 xen1 lock_gulmd_LTPX[2924]: I am (xen1.int.technicality.org) with ip (::ffff:10.20.0.201)
Jun 24 14:32:29 xen1 lock_gulmd_LTPX[2924]: This is cluster xencluster
Jun 24 14:32:29 xen1 lock_gulmd_core[2920]: ERROR [src/core_io.c:1243] Failed to recv all of the service login packet. -1:Unknown error 4294967295
Jun 24 14:32:29 xen1 lock_gulmd_LTPX[2924]: ERROR [src/ltpx_io.c:487] Failed to receive login reply. -104:104:Connection reset by peer

That normal?  :)

------------------------------------------------------------------------
| nate carlson | natecars@xxxxxxxxxxxxxxx | http://www.natecarlson.com |
|       depriving some poor village of its idiot since 1981            |
------------------------------------------------------------------------

--

Linux-cluster@xxxxxxxxxx
http://www.redhat.com/mailman/listinfo/linux-cluster
