Re: Question about cluster behavior

Yes, I know that with this configuration the cluster won't start if only
one machine is active.
But that is not my problem. My problem is that the cluster did not start
even with two machines and the quorum disk active.
There is no cluster.log in /var/log or in /var/log/cluster. In any case,
as you can see, there is not a single line in any of the cluster log files
between 08:30 and 09:00 on February 11th; only /var/log/messages has
entries during that period.
(Ignore the errors between 08:00 and 08:30, because the storage was not
active yet. At about 09:00 we started the third machine, so after that
everything was fine.)
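As a side note on the vote arithmetic this thread keeps coming back to ("more than half" of expected_votes), here is a minimal sketch of the rule. It is an illustrative model only, not cman's actual implementation, and the function name and defaults are mine:

```python
# Simplified model of CMAN-style quorum arithmetic (illustrative only):
# the cluster is quorate when the live votes form a strict majority of
# expected_votes.

def is_quorate(expected_votes, live_node_votes, qdisk_votes=0):
    """Return True when live votes exceed half of expected_votes."""
    total = sum(live_node_votes) + qdisk_votes
    return total * 2 > expected_votes  # strict majority

# expected_votes="5", one vote per node, a 2-vote qdisk:
print(is_quorate(5, [1, 1], qdisk_votes=2))  # True  (2 + 2 = 4 of 5)
# Two nodes with no usable qdisk (label not matched): 2 of 5 votes.
print(is_quorate(5, [1, 1]))                 # False
```

With one vote per node and no usable qdisk, two nodes give only 2 of 5 votes, which is consistent with a cluster refusing to start until more votes become available.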

Fabio Ferrari

rgmanager.log
Feb 07 17:41:23 rgmanager Exiting
Feb 11 08:13:49 rgmanager Waiting for CMAN to start
Feb 11 09:02:27 rgmanager Disconnecting from CMAN

corosync.log
Feb 07 17:41:52 corosync [MAIN  ] Corosync Cluster Engine exiting with
status 0 at main.c:1947.
Feb 11 08:12:42 corosync [MAIN  ] Corosync Cluster Engine ('1.4.1'):
started and ready to provide service.
Feb 11 08:12:42 corosync [MAIN  ] Corosync built-in features: nss dbus
rdma snmp
Feb 11 08:12:42 corosync [MAIN  ] Successfully read config from
/etc/cluster/cluster.conf
Feb 11 08:12:42 corosync [MAIN  ] Successfully parsed cman config
Feb 11 08:12:42 corosync [TOTEM ] Initializing transport (UDP/IP Multicast).
Feb 11 08:12:42 corosync [TOTEM ] Initializing transmit/receive security:
libtomcrypt SOBER128/SHA1HMAC (mode 0).
Feb 11 08:12:42 corosync [TOTEM ] The network interface [155.185.135.21]
is now up.
Feb 11 08:12:42 corosync [QUORUM] Using quorum provider quorum_cman
Feb 11 08:12:42 corosync [SERV  ] Service engine loaded: corosync cluster
quorum service v0.1
Feb 11 08:12:42 corosync [CMAN  ] CMAN 3.0.12.1 (built Dec  9 2013
10:48:35) started
Feb 11 08:12:42 corosync [SERV  ] Service engine loaded: corosync CMAN
membership service 2.90
Feb 11 08:12:42 corosync [SERV  ] Service engine loaded: openais
checkpoint service B.01.01
Feb 11 08:12:42 corosync [SERV  ] Service engine loaded: corosync extended
virtual synchrony service
Feb 11 08:12:42 corosync [SERV  ] Service engine loaded: corosync
configuration service
Feb 11 08:12:42 corosync [SERV  ] Service engine loaded: corosync cluster
closed process group service v1.01
Feb 11 08:12:42 corosync [SERV  ] Service engine loaded: corosync cluster
config database access v1.01
Feb 11 08:12:42 corosync [SERV  ] Service engine loaded: corosync profile
loading service
Feb 11 08:12:42 corosync [QUORUM] Using quorum provider quorum_cman
Feb 11 08:12:42 corosync [SERV  ] Service engine loaded: corosync cluster
quorum service v0.1
Feb 11 08:12:42 corosync [MAIN  ] Compatibility mode set to whitetank. 
Using V1 and V2 of the synchronization engine.
Feb 11 08:12:42 corosync [TOTEM ] A processor joined or left the
membership and a new membership was formed.
Feb 11 08:12:42 corosync [QUORUM] Members[1]: 1
Feb 11 08:12:42 corosync [QUORUM] Members[1]: 1
Feb 11 08:12:42 corosync [CPG   ] chosen downlist: sender r(0)
ip(155.185.135.21) ; members(old:0 left:0)
Feb 11 08:12:42 corosync [MAIN  ] Completed service synchronization, ready
to provide service.
Feb 11 08:12:48 corosync [SERV  ] Unloading all Corosync service engines.
Feb 11 08:12:48 corosync [SERV  ] Service engine unloaded: corosync
extended virtual synchrony service
Feb 11 08:12:48 corosync [SERV  ] Service engine unloaded: corosync
configuration service
Feb 11 08:12:48 corosync [SERV  ] Service engine unloaded: corosync
cluster closed process group service v1.01
Feb 11 08:12:48 corosync [SERV  ] Service engine unloaded: corosync
cluster config database access v1.01
Feb 11 08:12:48 corosync [SERV  ] Service engine unloaded: corosync
profile loading service
Feb 11 08:12:48 corosync [SERV  ] Service engine unloaded: openais
checkpoint service B.01.01
Feb 11 08:12:48 corosync [SERV  ] Service engine unloaded: corosync CMAN
membership service 2.90
Feb 11 08:12:48 corosync [SERV  ] Service engine unloaded: corosync
cluster quorum service v0.1
Feb 11 08:12:48 corosync [MAIN  ] Corosync Cluster Engine exiting with
status 0 at main.c:1947.
Feb 11 09:05:04 corosync [MAIN  ] Corosync Cluster Engine ('1.4.1'):
started and ready to provide service.

dlm_controld.log
Feb 03 12:29:55 dlm_controld dlm_controld 3.0.12.1 started
Feb 11 09:05:18 dlm_controld dlm_controld 3.0.12.1 started

fenced.log
Feb 03 13:45:58 fenced node_history_cluster_remove no nodeid 0
Feb 11 09:05:18 fenced fenced 3.0.12.1 started

qdiskd.log
Feb 07 17:41:52 qdiskd Unregistering quorum device.
Feb 11 08:12:46 qdiskd Unable to match label 'mail-qdisk' to any device
Feb 11 09:05:08 qdiskd Quorum Partition: /dev/block/253:0 Label: mail-qdisk

messages.log
Feb 11 08:16:26 eta ntpd[2667]: 0.0.0.0 c618 08 no_sys_peer
Feb 11 08:21:04 eta ricci[4712]: Executing '/usr/bin/virsh nodeinfo'
Feb 11 08:21:05 eta ricci[4714]: Executing
'/usr/libexec/ricci/ricci-worker -f /var/lib/ricci/queue/415378025'
Feb 11 08:21:05 eta ricci[4717]: Executing
'/usr/libexec/ricci/ricci-worker -f /var/lib/ricci/queue/1468491627'
Feb 11 08:21:09 eta ricci[4722]: Executing '/usr/bin/virsh nodeinfo'
Feb 11 08:21:09 eta ricci[4724]: Executing
'/usr/libexec/ricci/ricci-worker -f /var/lib/ricci/queue/290731729'
Feb 11 08:21:09 eta ricci[4727]: Executing
'/usr/libexec/ricci/ricci-worker -f /var/lib/ricci/queue/1318243426'
Feb 11 08:21:20 eta ricci[4733]: Executing '/usr/bin/virsh nodeinfo'
Feb 11 08:21:20 eta ricci[4735]: Executing
'/usr/libexec/ricci/ricci-worker -f /var/lib/ricci/queue/676971068'
Feb 11 08:21:21 eta ricci[4738]: Executing
'/usr/libexec/ricci/ricci-worker -f /var/lib/ricci/queue/1356213597'
Feb 11 08:21:22 eta ricci[4743]: Executing '/usr/bin/virsh nodeinfo'
Feb 11 08:21:22 eta ricci[4745]: Executing
'/usr/libexec/ricci/ricci-worker -f /var/lib/ricci/queue/82953350'
Feb 11 08:21:23 eta ricci[4748]: Executing
'/usr/libexec/ricci/ricci-worker -f /var/lib/ricci/queue/1314308023'
Feb 11 08:21:24 eta ricci[4752]: Executing '/usr/bin/virsh nodeinfo'
Feb 11 08:21:24 eta ricci[4754]: Executing
'/usr/libexec/ricci/ricci-worker -f /var/lib/ricci/queue/835292354'
Feb 11 08:21:25 eta ricci[4757]: Executing
'/usr/libexec/ricci/ricci-worker -f /var/lib/ricci/queue/1199640582'
Feb 11 08:21:39 eta ricci[4764]: Executing '/usr/bin/virsh nodeinfo'
Feb 11 08:21:39 eta ricci[4766]: Executing
'/usr/libexec/ricci/ricci-worker -f /var/lib/ricci/queue/617299274'
Feb 11 08:21:39 eta ricci[4769]: Executing
'/usr/libexec/ricci/ricci-worker -f /var/lib/ricci/queue/2029685252'
Feb 11 08:21:44 eta ricci[4774]: Executing '/usr/bin/virsh nodeinfo'
Feb 11 08:21:44 eta ricci[4776]: Executing
'/usr/libexec/ricci/ricci-worker -f /var/lib/ricci/queue/593330091'
Feb 11 08:21:44 eta ricci[4779]: Executing
'/usr/libexec/ricci/ricci-worker -f /var/lib/ricci/queue/1643272558'
Feb 11 08:21:47 eta ricci[4784]: Executing '/usr/bin/virsh nodeinfo'
Feb 11 08:21:47 eta ricci[4786]: Executing
'/usr/libexec/ricci/ricci-worker -f /var/lib/ricci/queue/352604825'
Feb 11 08:21:47 eta ricci[4789]: Executing
'/usr/libexec/ricci/ricci-worker -f /var/lib/ricci/queue/1877167955'
Feb 11 08:23:17 eta clamd[2675]: No stats for Database check - forcing reload
Feb 11 08:23:18 eta clamd[2675]: Reading databases from /var/lib/clamav
Feb 11 08:23:23 eta clamd[2675]: Database correctly reloaded (3107399
signatures)
Feb 11 08:30:19 eta ricci[4924]: Executing '/usr/bin/virsh nodeinfo'
Feb 11 08:30:19 eta ricci[4926]: Executing
'/usr/libexec/ricci/ricci-worker -f /var/lib/ricci/queue/1103385483'
Feb 11 08:30:19 eta ricci[4929]: Executing
'/usr/libexec/ricci/ricci-worker -f /var/lib/ricci/queue/1664807129'
Feb 11 08:32:05 eta ntpd[2667]: 0.0.0.0 c612 02 freq_set kernel 78.308 PPM
Feb 11 08:32:05 eta ntpd[2667]: 0.0.0.0 c615 05 clock_sync
Feb 11 08:33:24 eta clamd[2675]: SelfCheck: Database status OK.
Feb 11 08:40:38 eta kernel: scsi 3:0:2:0: Direct-Access     DGC      VRAID
           0532 PQ: 0 ANSI: 4
Feb 11 08:40:38 eta kernel: sd 3:0:2:0: Attached scsi generic sg4 type 0
Feb 11 08:40:38 eta kernel: sd 3:0:2:0: [sdc] 4194304 512-byte logical
blocks: (2.14 GB/2.00 GiB)
Feb 11 08:40:38 eta kernel: sd 3:0:2:0: [sdc] Write Protect is off
Feb 11 08:40:38 eta kernel: scsi 3:0:2:1: Direct-Access     DGC      VRAID
           0532 PQ: 0 ANSI: 4
Feb 11 08:40:38 eta kernel: sd 3:0:2:0: [sdc] Write cache: disabled, read
cache: enabled, doesn't support DPO or FUA
Feb 11 08:40:38 eta kernel: sd 3:0:2:1: Attached scsi generic sg5 type 0
Feb 11 08:40:38 eta kernel: sd 3:0:2:1: [sdd] Very big device. Trying to
use READ CAPACITY(16).
Feb 11 08:40:38 eta kernel: sd 3:0:2:1: [sdd] 6442450944 512-byte logical
blocks: (3.29 TB/3.00 TiB)
Feb 11 08:40:38 eta kernel: sd 3:0:2:1: [sdd] Write Protect is off
Feb 11 08:40:38 eta kernel: sdc:
Feb 11 08:40:38 eta kernel: sd 3:0:2:1: [sdd] Write cache: disabled, read
cache: enabled, doesn't support DPO or FUA
Feb 11 08:40:38 eta kernel: unknown partition table
Feb 11 08:40:38 eta kernel: sd 3:0:2:1: [sdd] Very big device. Trying to
use READ CAPACITY(16).
Feb 11 08:40:38 eta kernel: sdd:
Feb 11 08:40:38 eta kernel: sd 3:0:2:0: [sdc] Attached SCSI disk
Feb 11 08:40:38 eta kernel: unknown partition table
Feb 11 08:40:38 eta kernel: sd 3:0:2:1: [sdd] Very big device. Trying to
use READ CAPACITY(16).
Feb 11 08:40:38 eta kernel: sd 3:0:2:1: [sdd] Attached SCSI disk
Feb 11 08:40:38 eta multipathd: sdd: add path (uevent)
Feb 11 08:40:38 eta multipathd: mpathb: event checker started
Feb 11 08:40:38 eta multipathd: sdd [8:48]: path added to devmap mpathb
Feb 11 08:40:38 eta multipathd: sdc: add path (uevent)
Feb 11 08:40:38 eta kernel: device-mapper: multipath round-robin: version
1.0.0 loaded
Feb 11 08:40:38 eta multipathd: mpatha: load table [0 4194304 multipath 1
queue_if_no_path 0 1 1 round-robin 0 1 1 8:32 1]
Feb 11 08:40:38 eta multipathd: mpatha: event checker started
Feb 11 08:40:38 eta multipathd: sdc [8:32]: path added to devmap mpatha
Feb 11 08:40:38 eta kernel: scsi 4:0:1:0: Direct-Access     DGC      VRAID
           0532 PQ: 0 ANSI: 4
Feb 11 08:40:38 eta kernel: sd 4:0:1:0: Attached scsi generic sg6 type 0
Feb 11 08:40:38 eta kernel: sd 4:0:1:0: [sde] 4194304 512-byte logical
blocks: (2.14 GB/2.00 GiB)
Feb 11 08:40:38 eta kernel: sd 4:0:1:0: [sde] Write Protect is off
Feb 11 08:40:38 eta kernel: sd 4:0:1:0: [sde] Write cache: disabled, read
cache: enabled, doesn't support DPO or FUA
Feb 11 08:40:38 eta kernel: scsi 4:0:1:1: Direct-Access     DGC      VRAID
           0532 PQ: 0 ANSI: 4
Feb 11 08:40:38 eta kernel: sd 4:0:1:1: Attached scsi generic sg7 type 0
Feb 11 08:40:38 eta kernel: sd 4:0:1:1: [sdf] Very big device. Trying to
use READ CAPACITY(16).
Feb 11 08:40:38 eta kernel: sd 4:0:1:1: [sdf] 6442450944 512-byte logical
blocks: (3.29 TB/3.00 TiB)
Feb 11 08:40:38 eta kernel: sde:
Feb 11 08:40:38 eta kernel: sd 4:0:1:1: [sdf] Write Protect is off
Feb 11 08:40:38 eta kernel: sd 4:0:1:1: [sdf] Write cache: disabled, read
cache: enabled, doesn't support DPO or FUA
Feb 11 08:40:38 eta kernel: sd 4:0:1:1: [sdf] Very big device. Trying to
use READ CAPACITY(16).
Feb 11 08:40:38 eta kernel: sdf: unknown partition table
Feb 11 08:40:38 eta kernel: unknown partition table
Feb 11 08:40:38 eta kernel: sd 4:0:1:0: [sde] Attached SCSI disk
Feb 11 08:40:38 eta kernel: sd 4:0:1:1: [sdf] Very big device. Trying to
use READ CAPACITY(16).
Feb 11 08:40:38 eta kernel: sd 4:0:1:1: [sdf] Attached SCSI disk
Feb 11 08:40:38 eta multipathd: sdf: add path (uevent)
Feb 11 08:40:38 eta multipathd: mpathb: load table [0 6442450944 multipath
1 queue_if_no_path 0 2 1 round-robin 0 1 1 8:48 1 round-robin 0 1 1 8:80
1]
Feb 11 08:40:38 eta multipathd: sdf [8:80]: path added to devmap mpathb
Feb 11 08:40:38 eta multipathd: sde: add path (uevent)
Feb 11 08:40:38 eta multipathd: mpatha: load table [0 4194304 multipath 1
queue_if_no_path 0 2 1 round-robin 0 1 1 8:32 1 round-robin 0 1 1 8:64 1]
Feb 11 08:40:38 eta multipathd: sde [8:64]: path added to devmap mpatha
Feb 11 08:43:24 eta clamd[2675]: SelfCheck: Database status OK.
Feb 11 08:46:39 eta kernel: scsi 4:0:2:0: Direct-Access     DGC      VRAID
           0532 PQ: 0 ANSI: 4
Feb 11 08:46:39 eta kernel: sd 4:0:2:0: Attached scsi generic sg8 type 0
Feb 11 08:46:39 eta kernel: scsi 4:0:2:1: Direct-Access     DGC      VRAID
           0532 PQ: 0 ANSI: 4
Feb 11 08:46:39 eta kernel: sd 4:0:2:0: [sdg] 4194304 512-byte logical
blocks: (2.14 GB/2.00 GiB)
Feb 11 08:46:39 eta kernel: sd 4:0:2:0: [sdg] Write Protect is off
Feb 11 08:46:39 eta kernel: sd 4:0:2:1: Attached scsi generic sg9 type 0
Feb 11 08:46:39 eta kernel: sd 4:0:2:0: [sdg] Write cache: disabled, read
cache: enabled, doesn't support DPO or FUA
Feb 11 08:46:39 eta kernel: sdg:
Feb 11 08:46:39 eta kernel: sd 4:0:2:1: Warning! Received an indication
that the LUN reached a thin provisioning soft threshold.
Feb 11 08:46:39 eta kernel: sd 4:0:2:1: [sdh] Very big device. Trying to
use READ CAPACITY(16).
Feb 11 08:46:39 eta kernel: sd 4:0:2:1: [sdh] 6442450944 512-byte logical
blocks: (3.29 TB/3.00 TiB)
Feb 11 08:46:39 eta kernel: sd 4:0:2:1: [sdh] Write Protect is off
Feb 11 08:46:39 eta kernel: unknown partition table
Feb 11 08:46:39 eta kernel: sd 4:0:2:1: [sdh] Write cache: disabled, read
cache: enabled, doesn't support DPO or FUA
Feb 11 08:46:39 eta kernel: sd 4:0:2:1: [sdh] Very big device. Trying to
use READ CAPACITY(16).
Feb 11 08:46:39 eta kernel: sdh:
Feb 11 08:46:39 eta kernel: sd 4:0:2:0: [sdg] Attached SCSI disk
Feb 11 08:46:39 eta kernel: unknown partition table
Feb 11 08:46:39 eta kernel: sd 4:0:2:1: [sdh] Very big device. Trying to
use READ CAPACITY(16).
Feb 11 08:46:39 eta kernel: sd 4:0:2:1: [sdh] Attached SCSI disk
Feb 11 08:46:39 eta multipathd: sdg: add path (uevent)
Feb 11 08:46:39 eta multipathd: mpatha: load table [0 4194304 multipath 1
queue_if_no_path 0 3 1 round-robin 0 1 1 8:32 1 round-robin 0 1 1 8:64 1
round-robin 0 1 1 8:96 1]
Feb 11 08:46:39 eta multipathd: sdg [8:96]: path added to devmap mpatha
Feb 11 08:46:39 eta multipathd: sdh: add path (uevent)
Feb 11 08:46:39 eta multipathd: mpathb: load table [0 6442450944 multipath
1 queue_if_no_path 0 3 1 round-robin 0 1 1 8:48 1 round-robin 0 1 1 8:80 1
round-robin 0 1 1 8:112 1]
Feb 11 08:46:39 eta multipathd: sdh [8:112]: path added to devmap mpathb
Feb 11 08:46:40 eta kernel: scsi 3:0:3:0: Direct-Access     DGC      VRAID
           0532 PQ: 0 ANSI: 4
Feb 11 08:46:40 eta kernel: sd 3:0:3:0: Attached scsi generic sg10 type 0
Feb 11 08:46:40 eta kernel: sd 3:0:3:0: [sdi] 4194304 512-byte logical
blocks: (2.14 GB/2.00 GiB)
Feb 11 08:46:40 eta kernel: sd 3:0:3:0: [sdi] Write Protect is off
Feb 11 08:46:40 eta kernel: scsi 3:0:3:1: Direct-Access     DGC      VRAID
           0532 PQ: 0 ANSI: 4
Feb 11 08:46:40 eta kernel: sd 3:0:3:0: [sdi] Write cache: disabled, read
cache: enabled, doesn't support DPO or FUA
Feb 11 08:46:40 eta kernel: sd 3:0:3:1: Attached scsi generic sg11 type 0
Feb 11 08:46:40 eta kernel: sd 3:0:3:1: [sdj] Very big device. Trying to
use READ CAPACITY(16).
Feb 11 08:46:40 eta kernel: sd 3:0:3:1: [sdj] 6442450944 512-byte logical
blocks: (3.29 TB/3.00 TiB)
Feb 11 08:46:40 eta kernel: sd 3:0:3:1: [sdj] Write Protect is off
Feb 11 08:46:40 eta kernel: sd 3:0:3:1: [sdj] Write cache: disabled, read
cache: enabled, doesn't support DPO or FUA
Feb 11 08:46:40 eta kernel: sdi:
Feb 11 08:46:40 eta kernel: sd 3:0:3:1: [sdj] Very big device. Trying to
use READ CAPACITY(16).
Feb 11 08:46:40 eta kernel: sdj: unknown partition table
Feb 11 08:46:40 eta kernel: unknown partition table
Feb 11 08:46:40 eta kernel: sd 3:0:3:0: [sdi] Attached SCSI disk
Feb 11 08:46:40 eta kernel: sd 3:0:3:1: [sdj] Very big device. Trying to
use READ CAPACITY(16).
Feb 11 08:46:40 eta kernel: sd 3:0:3:1: [sdj] Attached SCSI disk
Feb 11 08:46:40 eta multipathd: sdj: add path (uevent)
Feb 11 08:46:40 eta multipathd: mpathb: load table [0 6442450944 multipath
1 queue_if_no_path 0 4 1 round-robin 0 1 1 8:48 1 round-robin 0 1 1 8:80 1
round-robin 0 1 1 8:112 1 round-robin 0 1 1 8:144 1]
Feb 11 08:46:40 eta multipathd: sdj [8:144]: path added to devmap mpathb
Feb 11 08:46:40 eta multipathd: sdi: add path (uevent)
Feb 11 08:46:40 eta multipathd: mpatha: load table [0 4194304 multipath 1
queue_if_no_path 0 4 2 round-robin 0 1 1 8:32 1 round-robin 0 1 1 8:64 1
round-robin 0 1 1 8:96 1 round-robin 0 1 1 8:128 1]
Feb 11 08:46:40 eta multipathd: sdi [8:128]: path added to devmap mpatha
Feb 11 08:46:52 eta multipathd: mpatha: sdc - directio checker reports
path is up
Feb 11 08:46:52 eta multipathd: 8:32: reinstated
Feb 11 08:46:52 eta multipathd: mpatha: remaining active paths: 4
Feb 11 08:46:52 eta multipathd: mpatha: switch to path group #1
Feb 11 08:46:52 eta multipathd: mpatha: switch to path group #1
Feb 11 08:53:24 eta clamd[2675]: SelfCheck: Database status OK.
Feb 11 08:56:59 eta kernel: rport-4:0-3: blocked FC remote port time out:
removing rport
Feb 11 08:57:00 eta kernel: rport-3:0-4: blocked FC remote port time out:
removing rport
Feb 11 09:02:24 eta init: tty (/dev/tty1) main process (3815) killed by
TERM signal
Feb 11 09:02:24 eta init: tty (/dev/tty2) main process (3817) killed by
TERM signal
Feb 11 09:02:24 eta init: tty (/dev/tty3) main process (3819) killed by
TERM signal
Feb 11 09:02:24 eta init: tty (/dev/tty4) main process (3821) killed by
TERM signal
Feb 11 09:02:24 eta init: tty (/dev/tty5) main process (3823) killed by
TERM signal
Feb 11 09:02:24 eta init: tty (/dev/tty6) main process (3825) killed by
TERM signal
Feb 11 09:02:27 eta modclusterd: shutdown succeeded
Feb 11 09:02:27 eta rgmanager[3753]: Disconnecting from CMAN
Feb 11 09:02:27 eta rgmanager[3753]: Exiting
Feb 11 09:02:29 eta ricci: shutdown succeeded
Feb 11 09:02:29 eta oddjobd: oddjobd shutdown succeeded
Feb 11 09:02:31 eta dataeng: dsm_sa_eventmgrd shutdown succeeded
Feb 11 09:02:37 eta dataeng: dsm_sa_datamgrd shutdown succeeded
Feb 11 09:02:38 eta saslauthd[3652]: server_exit     : master exited: 3652
Feb 11 09:02:38 eta abrtd: Got signal 15, exiting
Feb 11 09:02:39 eta clamd[2675]: Pid file removed.
Feb 11 09:02:39 eta clamd[2675]: --- Stopped at Tue Feb 11 09:02:39 2014
Feb 11 09:02:39 eta clamd[2675]: Socket file removed.
Feb 11 09:02:39 eta snmpd[2647]: Received TERM or STOP signal...  shutting
down...
Feb 11 09:02:44 eta acpid: exiting
Feb 11 09:02:44 eta ntpd[2667]: ntpd exiting on signal 15
Feb 11 09:02:44 eta multipathd: mpathb: event checker exit
Feb 11 09:02:44 eta multipathd: dm-0: remove map (uevent)
Feb 11 09:02:44 eta multipathd: dm-0: devmap not registered, can't remove
Feb 11 09:02:44 eta multipathd: dm-0: remove map (uevent)
Feb 11 09:02:44 eta multipathd: dm-0: devmap not registered, can't remove
Feb 11 09:02:44 eta multipathd: mpatha: event checker exit
Feb 11 09:02:44 eta multipathd: dm-1: remove map (uevent)
Feb 11 09:02:44 eta multipathd: dm-1: devmap not registered, can't remove
Feb 11 09:02:44 eta multipathd: dm-1: remove map (uevent)
Feb 11 09:02:44 eta multipathd: dm-1: devmap not registered, can't remove
Feb 11 09:02:44 eta multipathd: --------shut down-------
Feb 11 09:02:44 eta init: Disconnected from system bus
Feb 11 09:02:44 eta rpcbind: rpcbind terminating on signal. Restart with
"rpcbind -w"
Feb 11 09:02:45 eta auditd[1747]: The audit daemon is exiting.
Feb 11 09:02:45 eta kernel: type=1305 audit(1392105765.102:235):
audit_pid=0 old=1747 auid=4294967295 ses=4294967295 res=1
Feb 11 09:02:45 eta kernel: type=1305 audit(1392105765.201:236):
audit_enabled=0 old=1 auid=4294967295 ses=4294967295 res=1
Feb 11 09:02:45 eta nslcd[1862]: caught signal SIGTERM (15), shutting down
Feb 11 09:02:45 eta nslcd[1862]: version 0.7.5 bailing out
Feb 11 09:02:45 eta kernel: Kernel logging (proc) stopped.
Feb 11 09:02:45 eta rsyslogd: [origin software="rsyslogd"
swVersion="5.8.10" x-pid="1875" x-info="http://www.rsyslog.com"] exiting
on signal 15.
Feb 11 09:05:01 eta kernel: imklog 5.8.10, log source = /proc/kmsg started.
Feb 11 09:05:01 eta rsyslogd: [origin software="rsyslogd"
swVersion="5.8.10" x-pid="2083" x-info="http://www.rsyslog.com"] start
Feb 11 09:05:01 eta kernel: Initializing cgroup subsys cpuset
Feb 11 09:05:01 eta kernel: Initializing cgroup subsys cpu
Feb 11 09:05:01 eta kernel: Linux version 2.6.32-431.3.1.el6.x86_64
(mockbuild@xxxxxxxxxxxxxxxxxxxxxxxxx) (gcc version 4.4.7 20120313 (Red Hat
4.4.7-4) (GCC) ) #1 SMP Fri Jan 3 21:39:27 UTC 2014
Feb 11 09:05:01 eta kernel: Command line: ro
root=UUID=0ee47a33-164b-4fad-9773-770aa314d56b LANG=it_IT.UTF-8 rd_NO_LUKS
 KEYBOARDTYPE=pc KEYTABLE=it rd_NO_MD SYSFONT=latarcyrheb-sun16
crashkernel=auto rd_NO_LVM rd_NO_DM rhgb quiet
Feb 11 09:05:01 eta kernel: KERNEL supported cpus:


> In this case your quorum disk should provide 2 votes. Then, if two
> nodes die and you want to continue with just one node: 1 (vote for the
> remaining node) + 2 (quorum device) = 3 votes out of 5, which is more
> than half.
>
>
> 2014-02-14 18:34 GMT+01:00 Digimer <lists@xxxxxxxxxx>:
>
>> Replies in-line:
>>
>>
>> On 14/02/14 12:07 PM, FABIO FERRARI wrote:
>>
>>> So it's not a normal behavior, I guess.
>>>
>>> Here is my cluster.conf:
>>>
>>> <?xml version="1.0"?>
>>> <cluster config_version="59" name="mail">
>>>          <clusternodes>
>>>                  <clusternode name="eta.mngt.unimo.it" nodeid="1">
>>>                          <fence>
>>>                                  <method name="fence-eta">
>>>                                          <device name="fence-eta"/>
>>>                                  </method>
>>>                          </fence>
>>>                  </clusternode>
>>>                  <clusternode name="beta.mngt.unimo.it" nodeid="2">
>>>                          <fence>
>>>                                  <method name="fence-beta">
>>>                                          <device name="fence-beta"/>
>>>                                  </method>
>>>                          </fence>
>>>                  </clusternode>
>>>                  <clusternode name="guerro.mngt.unimo.it" nodeid="3">
>>>                          <fence>
>>>                                  <method name="fence-guerro">
>>>                                          <device name="fence-guerro"
>>> port="Guerro
>>> " ssl="on" uuid="4213f370-9572-63c7-26e4-22f0f43843aa"/>
>>>                                  </method>
>>>                          </fence>
>>>                  </clusternode>
>>>          </clusternodes>
>>>          <cman expected_votes="5"/>
>>>
>>
>> You generally don't need to set this; the cluster can calculate it.
>>
>>           <quorumd label="mail-qdisk"/>
>>>
>>
>> You don't set any votes, so the default is "1". With expected_votes
>> at 5, that means all three nodes have to be up, or two nodes plus the
>> qdisk.
>>
>>
>>           <rm>
>>>                  <resources>
>>>                          <ip address="155.185.44.61/24"
>>> sleeptime="10"/>
>>>                          <mysql config_file="/etc/my.cnf"
>>> listen_address="155.185.44.61" name="mysql"
>>> shutdown_wait="10" startup_wait="10"/>
>>>                          <script file="/etc/init.d/httpd"
>>> name="httpd"/>
>>>                          <script file="/etc/init.d/postfix"
>>> name="postfix"/>
>>>                          <script file="/etc/init.d/dovecot"
>>> name="dovecot"/>
>>>                          <fs device="/dev/mapper/mailvg-maillv"
>>> force_fsck="1" force_unmount="1" fsid="58161"
>>> fstype="xfs" mountpoint="/cl" name="mailvg-maill
>>> v" options="defaults,noauto" self_fence="1"/>
>>>                          <lvm lv_name="maillv" name="lvm-mailvg-maillv"
>>> self_fence="1" vg_name="mailvg"/>
>>>                  </resources>
>>>                  <failoverdomains>
>>>                          <failoverdomain name="mailfailoverdomain"
>>> nofailback="1" ordered="1" restricted="1">
>>>                                  <failoverdomainnode
>>> name="eta.mngt.unimo.it" priority="1"/>
>>>                                  <failoverdomainnode
>>> name="beta.mngt.unimo.it" priority="2"/>
>>>                                  <failoverdomainnode
>>> name="guerro.mngt.unimo.it" priority="3"/>
>>>                          </failoverdomain>
>>>                  </failoverdomains>
>>>                  <service domain="mailfailoverdomain" max_restarts="3"
>>> name="mailservices" recovery="restart"
>>> restart_expire_time="600">
>>>                          <fs ref="mailvg-maillv">
>>>                                  <ip ref="155.185.44.61/24">
>>>                                          <mysql ref="mysql">
>>>                                                  <script ref="httpd"/>
>>>                                                  <script
>>> ref="postfix"/>
>>>                                                  <script
>>> ref="dovecot"/>
>>>                                          </mysql>
>>>                                  </ip>
>>>                          </fs>
>>>                  </service>
>>>          </rm>
>>>          <fencedevices>
>>>                  <fencedevice agent="fence_ipmilan" auth="password"
>>> ipaddr="155.185.135.105" lanplus="on" login="root"
>>> name="fence-eta" passwd="******" pr
>>> ivlvl="ADMINISTRATOR"/>
>>>                  <fencedevice agent="fence_ipmilan" auth="password"
>>> ipaddr="155.185.135.106" lanplus="on" login="root"
>>> name="fence-beta" passwd="******" p
>>> rivlvl="ADMINISTRATOR"/>
>>>                  <fencedevice agent="fence_vmware_soap"
>>> ipaddr="155.185.0.10" login="etabetaguerro"
>>> name="fence-guerro" passwd="******"/>
>>>          </fencedevices>
>>> </cluster>
>>>
>>> What log file do you need? There are many in /var/log/cluster..
>>>
>>
>> By default, /var/log/messages is the most useful. Checking 'cman_tool
>> status' and 'clustat' is also a good idea.
>>
>> --
>> Digimer
>> Papers and Projects: https://alteeve.ca/w/
>> What if the cure for cancer is trapped in the mind of a person without
>> access to education?
>>
>>
>> --
>> Linux-cluster mailing list
>> Linux-cluster@xxxxxxxxxx
>> https://www.redhat.com/mailman/listinfo/linux-cluster
>>
>
>
>
> --
> this is my life and I live it for as long as God wills
> --
> Linux-cluster mailing list
> Linux-cluster@xxxxxxxxxx
> https://www.redhat.com/mailman/listinfo/linux-cluster
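To double-check the configured votes against Digimer's reading of the quoted cluster.conf, here is a hypothetical helper that tallies them. The assumption that a missing "votes" attribute defaults to 1 (for both clusternode and quorumd) follows what was said in this thread; verify against 'cman_tool status' on the running cluster.

```python
# Hypothetical vote tally from a cluster.conf fragment (inlined here so
# the example is self-contained); missing "votes" attributes are assumed
# to default to 1, per the discussion above.
import xml.etree.ElementTree as ET

conf = """<?xml version="1.0"?>
<cluster config_version="59" name="mail">
  <clusternodes>
    <clusternode name="eta.mngt.unimo.it" nodeid="1"/>
    <clusternode name="beta.mngt.unimo.it" nodeid="2"/>
    <clusternode name="guerro.mngt.unimo.it" nodeid="3"/>
  </clusternodes>
  <cman expected_votes="5"/>
  <quorumd label="mail-qdisk"/>
</cluster>"""

root = ET.fromstring(conf)
node_votes = sum(int(n.get("votes", "1")) for n in root.iter("clusternode"))
qdisk = root.find("quorumd")
qdisk_votes = int(qdisk.get("votes", "1")) if qdisk is not None else 0
expected = int(root.find("cman").get("expected_votes", "0"))

print(node_votes, qdisk_votes, expected)  # 3 1 5
```

With those defaults, two nodes plus the qdisk give 3 of 5 expected votes, just reaching a majority, which matches the "two nodes and qdisk" reading above.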


-- 
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster



