Hi Jan,
I finally traced the problem. Details below for anyone who hits the same issue.
I have to use an address from the 224.0.0.0/24 range because my corosync instances run on Cisco UCS blades connected to a Fabric Interconnect whose firmware is old enough that IGMP snooping cannot be switched off (it is enabled by default). IGMP snooping does not apply to 224.0.0.0/24, which is why I use this range.
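For reference, the relevant totem interface section in corosync.conf looks roughly like this (the values simply mirror the addresses and ports visible in the tcpdump output quoted below; treat them as illustrative):

totem {
    version: 2
    interface {
        ringnumber: 0
        bindnetaddr: 10.40.129.0
        mcastaddr: 224.0.0.187
        mcastport: 5465
    }
}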
But the multicast problem turned out to be that the network adapter (Palo) installed in the blades filters multicast traffic in the 224.0.0.0/24 range (with the small exception of 224.0.0.251, mDNS). As a result, multicast traffic for 224.0.0.0/24 never reaches its destination.
There is a workaround: I had to switch the adapters into promiscuous mode (I found this out accidentally when I ran tcpdump on both nodes simultaneously). Since I turned on promiscuous mode on both interfaces (ip link set eth0 promisc on), I have not observed any problems with corosync.
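For anyone applying the same workaround, the steps on each node are roughly as follows (eth0 is whatever interface corosync binds to):

root@nodeA:~# ip link set eth0 promisc on
root@nodeA:~# ip link show eth0    # the interface flags should now include PROMISC

Note that this is not persistent across reboots, so it has to be reapplied by whatever mechanism your distribution provides (e.g. an interface post-up hook).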
After I discovered this trick with promiscuous mode, I found a topic on the Cisco support forum that confirms the problem described above:
https://supportforums.cisco.com/thread/2059897
So, to sum up: problem solved.
Many thanks, Jan, for your help.
Regards
Andrew
2013/8/26 Jan Friesse <jfriesse@xxxxxxxxxx>
Andrew,
please really try omping, ideally with the same address/port you wish to use
for corosync. Omping will test not only multicast but also unicast traffic.
Also make sure not to use 224.0.0.0/24; these addresses are reserved. As the
comment in corosync.conf suggests, use 239.255.x.x.
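For example, run something like this on both nodes at the same time (the hostnames are placeholders; the address follows the 239.255.x.x recommendation and the port is the mcastport used in this thread, so adjust both to your setup):

omping -m 239.255.1.1 -p 5465 nodeA nodeB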
Regards,
Honza
and k wrote:
> Hi Jan,
>
> 2013/8/23 Jan Friesse <jfriesse@xxxxxxxxxx>
>
>> Andrew,
>> please make really sure that mcast works. You can use, for example, omping
>> for that
>>
>
> I used a plain ping and tcpdump:
>
> root@nodeA:~# ping 224.0.0.187
> PING 224.0.0.187 (224.0.0.187) 56(84) bytes of data.
> ^C
> --- 224.0.0.187 ping statistics ---
> 5 packets transmitted, 0 received, 100% packet loss, time 4031ms
>
>
> root@nodeB:~# tcpdump -n -i eth0 host 224.0.0.187
> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
> listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
> 11:48:52.665093 IP 10.40.129.209 > 224.0.0.187: ICMP echo request, id 7476,
> seq 1, length 64
> 11:48:53.672241 IP 10.40.129.209 > 224.0.0.187: ICMP echo request, id 7476,
> seq 2, length 64
> 11:48:54.680228 IP 10.40.129.209 > 224.0.0.187: ICMP echo request, id 7476,
> seq 3, length 64
> 11:48:55.688198 IP 10.40.129.209 > 224.0.0.187: ICMP echo request, id 7476,
> seq 4, length 64
> 11:48:56.696178 IP 10.40.129.209 > 224.0.0.187: ICMP echo request, id 7476,
> seq 5, length 64
>
> traffic is visible, as is traffic in the opposite direction
>
> root@nodeB:~# ping 224.0.0.187
> PING 224.0.0.187 (224.0.0.187) 56(84) bytes of data.
> ^C
> --- 224.0.0.187 ping statistics ---
> 4 packets transmitted, 0 received, 100% packet loss, time 3025ms
>
> root@nodeA:~# tcpdump -n -i eth0 host 224.0.0.187
> tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
> listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
> 11:12:22.493138 IP 10.40.129.210 > 224.0.0.187: ICMP echo request, id 6855,
> seq 1, length 64
> 11:12:23.502201 IP 10.40.129.210 > 224.0.0.187: ICMP echo request, id 6855,
> seq 2, length 64
> 11:12:24.510161 IP 10.40.129.210 > 224.0.0.187: ICMP echo request, id 6855,
> seq 3, length 64
> 11:12:25.518215 IP 10.40.129.210 > 224.0.0.187: ICMP echo request, id 6855,
> seq 4, length 64
>
>
>
>
>> What seems especially strange is node A. If you compare your node
>> A output:
>>
>>> 10:33:57.364852 IP 10.40.129.209.5464 > 224.0.0.187.5465: UDP, length 82
>>> 10:33:57.599081 IP 10.40.129.209.5464 > 224.0.0.187.5465: UDP, length 82
>>
>> with node B output:
>>
>>> 10:33:57.612370 IP 10.40.129.210.5464 > 224.0.0.187.5465: UDP, length 112
>>> 10:33:57.612473 IP 10.40.129.209.5464 > 224.0.0.187.5465: UDP, length 156
>>
>> you can see node B sees both 10.40.129.210.5464 and 10.40.129.209.5464,
>> but node A sees only 10.40.129.209.5464
>>
>
> Sorry, I didn't state it clearly: all the tcpdump output in the previous mail
> is from nodeA. The first 9 packets with length 82 were captured before I
> started corosync on nodeB, and the rest were captured while both
> instances of corosync were up.
>
> as you can see, when I started the corosync instance on nodeB, I captured a packet
> from nodeB:
>
> 10:33:57.612370 IP 10.40.129.210.5464 > 224.0.0.187.5465: UDP, length 112
>
> on nodeA, so multicast traffic is working well (I think).
>
>
>> You can also try to turn on debugging (logging debug: on in config file)
>> and see logs.
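>> For reference, a minimal logging section with debugging enabled would look
>> roughly like this:
>>
>> logging {
>>     debug: on
>>     to_syslog: yes
>> }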
>>
>>
> I turned on the debug option and ran the following test:
>
> 1) started corosync on nodeA
> 2) cleared the logs on nodeA
> 3) started corosync on nodeB
>
> There have been no logs on nodeA since nodeB started (strange); there should be
> something if corosync on nodeA had received a packet from nodeB.
>
> I can't see anything strange in the nodeB logs, which are below:
>
> Aug 23 11:58:15 nodeB corosync[6874]: [MAIN ] Corosync Cluster Engine
> ('1.4.2'): started and ready to provide service.
> Aug 23 11:58:15 nodeB corosync[6874]: [MAIN ] Corosync built-in
> features: nss
> Aug 23 11:58:15 nodeB corosync[6874]: [MAIN ] Successfully read main
> configuration file '/etc/corosync/corosync.conf'.
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] Token Timeout (3000 ms)
> retransmit timeout (294 ms)
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] token hold (225 ms)
> retransmits before loss (10 retrans)
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] join (60 ms) send_join (0
> ms) consensus (3600 ms) merge (200 ms)
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] downcheck (1000 ms) fail
> to recv const (2500 msgs)
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] seqno unchanged const (30
> rotations) Maximum network MTU 1402
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] window size per rotation
> (50 messages) maximum messages per rotation (20 messages)
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] missed count const (5
> messages)
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] send threads (0 threads)
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] RRP token expired timeout
> (294 ms)
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] RRP token problem counter
> (2000 ms)
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] RRP threshold (10 problem
> count)
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] RRP multicast threshold
> (100 problem count)
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] RRP automatic recovery
> check timeout (1000 ms)
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] RRP mode set to none.
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] heartbeat_failures_allowed
> (0)
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] max_network_delay (50 ms)
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] HeartBeat is Disabled. To
> enable set heartbeat_failures_allowed > 0
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] Initializing transport
> (UDP/IP Multicast).
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] Initializing
> transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
> Aug 23 11:58:15 nodeB corosync[6874]: [IPC ] you are using ipc api v2
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] Receive multicast socket
> recv buffer size (262142 bytes).
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] Transmit multicast socket
> send buffer size (262142 bytes).
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] The network interface
> [10.40.129.210] is now up.
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] Created or loaded sequence
> id a4.10.40.129.210 for this ring.
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] debug: pcmk_user_lookup:
> Cluster user root has uid=0 gid=0
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] info: process_ais_conf:
> Reading configure
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] info: config_find_init:
> Local handle: 4730966301143465987 for logging
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] info: config_find_next:
> Processing additional logging options...
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] info: get_config_opt:
> Found 'on' for option: debug
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] info: get_config_opt:
> Found 'no' for option: to_logfile
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] info: get_config_opt:
> Found 'yes' for option: to_syslog
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] info: get_config_opt:
> Found 'daemon' for option: syslog_facility
> Aug 23 11:58:15 nodeB cib: [6882]: info: crm_log_init_worker: Changed
> active directory to /var/lib/heartbeat/cores/hacluster
> Aug 23 11:58:15 nodeB cib: [6882]: info: retrieveCib: Reading cluster
> configuration from: /var/lib/heartbeat/crm/cib.xml (digest:
> /var/lib/heartbeat/crm/cib.xml.sig)
> Aug 23 11:58:15 nodeB cib: [6882]: debug: readCibXmlFile: [on-disk] <cib
> epoch="5" num_updates="0" admin_epoch="0" validate-with="pacemaker-1.2"
> cib-last-written="Fri Aug 23 11:23:56 2013" crm_feature_set="3.0.6"
> update-origin="nodeA" update-client="crmd" have-quorum="1" >
> Aug 23 11:58:15 nodeB cib: [6882]: debug: readCibXmlFile: [on-disk]
> <configuration >
> Aug 23 11:58:15 nodeB cib: [6882]: debug: readCibXmlFile: [on-disk]
> <crm_config >
> Aug 23 11:58:15 nodeB cib: [6882]: debug: readCibXmlFile: [on-disk]
> <cluster_property_set id="cib-bootstrap-options" >
> Aug 23 11:58:15 nodeB cib: [6882]: debug: readCibXmlFile: [on-disk]
> <nvpair id="cib-bootstrap-options-dc-version" name="dc-version"
> value="1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff" />
> Aug 23 11:58:15 nodeB cib: [6882]: debug: readCibXmlFile: [on-disk]
> <nvpair id="cib-bootstrap-options-cluster-infrastructure"
> name="cluster-infrastructure" value="openais" />
> Aug 23 11:58:15 nodeB cib: [6882]: debug: readCibXmlFile: [on-disk]
> <nvpair id="cib-bootstrap-options-expected-quorum-votes"
> name="expected-quorum-votes" value="2" />
> Aug 23 11:58:15 nodeB cib: [6882]: debug: readCibXmlFile: [on-disk]
> </cluster_property_set>
> Aug 23 11:58:15 nodeB cib: [6882]: debug: readCibXmlFile: [on-disk]
> </crm_config>
> Aug 23 11:58:15 nodeB cib: [6882]: debug: readCibXmlFile: [on-disk]
> <nodes >
> Aug 23 11:58:15 nodeB cib: [6882]: debug: readCibXmlFile: [on-disk]
> <node id="nodeA" uname="nodeA" type="normal" />
> Aug 23 11:58:15 nodeB cib: [6882]: debug: readCibXmlFile: [on-disk]
> <node id="nodeB" uname="nodeB" type="normal" />
> Aug 23 11:58:15 nodeB cib: [6882]: debug: readCibXmlFile: [on-disk]
> </nodes>
> Aug 23 11:58:15 nodeB cib: [6882]: debug: readCibXmlFile: [on-disk]
> <resources />
> Aug 23 11:58:15 nodeB cib: [6882]: debug: readCibXmlFile: [on-disk]
> <constraints />
> Aug 23 11:58:15 nodeB cib: [6882]: debug: readCibXmlFile: [on-disk]
> </configuration>
> Aug 23 11:58:15 nodeB cib: [6882]: debug: readCibXmlFile: [on-disk]
> <status />
> Aug 23 11:58:15 nodeB cib: [6882]: debug: readCibXmlFile: [on-disk] </cib>
> Aug 23 11:58:15 nodeB cib: [6882]: info: validate_with_relaxng: Creating
> RNG parser context
> Aug 23 11:58:15 nodeB cib: [6882]: debug: activateCibXml: Triggering CIB
> write for start op
> Aug 23 11:58:15 nodeB cib: [6882]: info: startCib: CIB Initialization
> completed successfully
> Aug 23 11:58:15 nodeB cib: [6882]: info: get_cluster_type: Cluster type is:
> 'openais'
> Aug 23 11:58:15 nodeB cib: [6882]: notice: crm_cluster_connect: Connecting
> to cluster infrastructure: classic openais (with plugin)
> Aug 23 11:58:15 nodeB cib: [6882]: info: init_ais_connection_classic:
> Creating connection to our Corosync plugin
> Aug 23 11:58:15 nodeB stonith-ng: [6883]: info: Invoked:
> /usr/lib/pacemaker/stonithd
> Aug 23 11:58:15 nodeB stonith-ng: [6883]: info: crm_log_init_worker:
> Changed active directory to /var/lib/heartbeat/cores/root
> Aug 23 11:58:15 nodeB stonith-ng: [6883]: info: get_cluster_type: Cluster
> type is: 'openais'
> Aug 23 11:58:15 nodeB stonith-ng: [6883]: notice: crm_cluster_connect:
> Connecting to cluster infrastructure: classic openais (with plugin)
> Aug 23 11:58:15 nodeB stonith-ng: [6883]: info:
> init_ais_connection_classic: Creating connection to our Corosync plugin
> Aug 23 11:58:15 nodeB lrmd: [6884]: info: enabling coredumps
> Aug 23 11:58:15 nodeB lrmd: [6884]: WARN: Core dumps could be lost if
> multiple dumps occur.
> Aug 23 11:58:15 nodeB lrmd: [6884]: WARN: Consider setting non-default
> value in /proc/sys/kernel/core_pattern (or equivalent) for maximum
> supportability
> Aug 23 11:58:15 nodeB lrmd: [6884]: WARN: Consider setting
> /proc/sys/kernel/core_uses_pid (or equivalent) to 1 for maximum
> supportability
> Aug 23 11:58:15 nodeB lrmd: [6884]: debug: main: run the loop...
> Aug 23 11:58:15 nodeB lrmd: [6884]: info: Started.
> Aug 23 11:58:15 nodeB attrd: [6885]: info: Invoked: /usr/lib/pacemaker/attrd
> Aug 23 11:58:15 nodeB attrd: [6885]: info: crm_log_init_worker: Changed
> active directory to /var/lib/heartbeat/cores/hacluster
> Aug 23 11:58:15 nodeB pengine: [6886]: info: Invoked:
> /usr/lib/pacemaker/pengine
> Aug 23 11:58:15 nodeB attrd: [6885]: info: main: Starting up
> Aug 23 11:58:15 nodeB attrd: [6885]: info: get_cluster_type: Cluster type
> is: 'openais'
> Aug 23 11:58:15 nodeB attrd: [6885]: notice: crm_cluster_connect:
> Connecting to cluster infrastructure: classic openais (with plugin)
> Aug 23 11:58:15 nodeB attrd: [6885]: info: init_ais_connection_classic:
> Creating connection to our Corosync plugin
> Aug 23 11:58:15 nodeB pengine: [6886]: info: crm_log_init_worker: Changed
> active directory to /var/lib/heartbeat/cores/hacluster
> Aug 23 11:58:15 nodeB pengine: [6886]: debug: main: Checking for old
> instances of pengine
> Aug 23 11:58:15 nodeB pengine: [6886]: debug:
> init_client_ipc_comms_nodispatch: Attempting to talk on:
> /var/run/crm/pengine
> Aug 23 11:58:15 nodeB pengine: [6886]: debug:
> init_client_ipc_comms_nodispatch: Could not init comms on:
> /var/run/crm/pengine
> Aug 23 11:58:15 nodeB pengine: [6886]: debug: main: Init server comms
> Aug 23 11:58:15 nodeB pengine: [6886]: info: main: Starting pengine
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] info: config_find_init:
> Local handle: 7739444317642555396 for quorum
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] info: config_find_next: No
> additional configuration supplied for: quorum
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] info: get_config_opt: No
> default for option: provider
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] info: config_find_init:
> Local handle: 5650605097994944517 for service
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] info: config_find_next:
> Processing additional service options...
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] info: get_config_opt:
> Found '0' for option: ver
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] info: get_config_opt:
> Defaulting to 'pcmk' for option: clustername
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] info: get_config_opt:
> Defaulting to 'no' for option: use_logd
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] info: get_config_opt:
> Defaulting to 'no' for option: use_mgmtd
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] info: pcmk_startup: CRM:
> Initialized
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] Logging: Initialized
> pcmk_startup
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] info: pcmk_startup:
> Maximum core file size is: 18446744073709551615
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] debug: pcmk_user_lookup:
> Cluster user hacluster has uid=106 gid=108
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] info: pcmk_startup:
> Service: 9
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] info: pcmk_startup: Local
> hostname: nodeB
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] info: pcmk_update_nodeid:
> Local node id: 1384196106
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] info: update_member:
> Creating entry for node 1384196106 born on 0
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] info: update_member:
> 0x1f7fdc0 Node 1384196106 now known as nodeB (was: (null))
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] info: update_member: Node
> nodeB now has 1 quorum votes (was 0)
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] info: update_member: Node
> 1384196106/nodeB is now: member
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] debug: pcmk_user_lookup:
> Cluster user hacluster has uid=106 gid=108
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] info: spawn_child: Forked
> child 6882 for process cib
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] info: spawn_child: Forked
> child 6883 for process stonith-ng
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] info: spawn_child: Forked
> child 6884 for process lrmd
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] debug: pcmk_user_lookup:
> Cluster user hacluster has uid=106 gid=108
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] info: spawn_child: Forked
> child 6885 for process attrd
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] debug: pcmk_user_lookup:
> Cluster user hacluster has uid=106 gid=108
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] info: spawn_child: Forked
> child 6886 for process pengine
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] debug: pcmk_user_lookup:
> Cluster user hacluster has uid=106 gid=108
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] debug: pcmk_user_lookup:
> Cluster user hacluster has uid=106 gid=108
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] info: spawn_child: Forked
> child 6887 for process crmd
> Aug 23 11:58:15 nodeB corosync[6874]: [SERV ] Service engine loaded:
> Pacemaker Cluster Manager 1.1.7
> Aug 23 11:58:15 nodeB corosync[6874]: [SERV ] Service engine loaded:
> corosync extended virtual synchrony service
> Aug 23 11:58:15 nodeB corosync[6874]: [SERV ] Service engine loaded:
> corosync configuration service
> Aug 23 11:58:15 nodeB corosync[6874]: [SERV ] Service engine loaded:
> corosync cluster closed process group service v1.01
> Aug 23 11:58:15 nodeB corosync[6874]: [SERV ] Service engine loaded:
> corosync cluster config database access v1.01
> Aug 23 11:58:15 nodeB corosync[6874]: [SERV ] Service engine loaded:
> corosync profile loading service
> Aug 23 11:58:15 nodeB corosync[6874]: [SERV ] Service engine loaded:
> corosync cluster quorum service v0.1
> Aug 23 11:58:15 nodeB corosync[6874]: [MAIN ] Compatibility mode set to
> whitetank. Using V1 and V2 of the synchronization engine.
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] entering GATHER state from
> 15.
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] Creating commit token
> because I am the rep.
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] Saving state aru 0 high
> seq received 0
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] Storing new sequence id
> for ring a8
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] entering COMMIT state.
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] got commit token
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] entering RECOVERY state.
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] position [0] member
> 10.40.129.210:
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] previous ring seq a4 rep
> 10.40.129.210
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] aru 0 high delivered 0
> received flag 1
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] Did not need to originate
> any messages in recovery.
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] got commit token
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] Sending initial ORF token
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] token retrans flag is 0 my
> set retrans flag0 retrans queue empty 1 count 0, aru 0
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] install seq 0 aru 0 high
> seq received 0
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] token retrans flag is 0 my
> set retrans flag0 retrans queue empty 1 count 1, aru 0
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] install seq 0 aru 0 high
> seq received 0
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] token retrans flag is 0 my
> set retrans flag0 retrans queue empty 1 count 2, aru 0
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] install seq 0 aru 0 high
> seq received 0
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] token retrans flag is 0 my
> set retrans flag0 retrans queue empty 1 count 3, aru 0
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] install seq 0 aru 0 high
> seq received 0
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] retrans flag count 4 token
> aru 0 install seq 0 aru 0 0
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] Resetting old ring state
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] recovery to regular 1-0
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] Delivering to app 1 to 0
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] notice: pcmk_peer_update:
> Transitional membership event on ring 168: memb=0, new=0, lost=0
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] notice: pcmk_peer_update:
> Stable membership event on ring 168: memb=1, new=1, lost=0
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] info: pcmk_peer_update:
> NEW: nodeB 1384196106
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] debug: pcmk_peer_update:
> Node 1384196106 has address r(0) ip(10.40.129.210)
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] info: pcmk_peer_update:
> MEMB: nodeB 1384196106
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] debug: send_cluster_id:
> Leaving born-on unset: 168
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] debug: send_cluster_id:
> Local update: id=1384196106, born=0, seq=168
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] info: update_member: Node
> nodeB now has process list: 00000000000000000000000000111312 (1118994)
> Aug 23 11:58:15 nodeB corosync[6874]: [SYNC ] This node is within the
> primary component and will provide service.
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] entering OPERATIONAL state.
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] A processor joined or left
> the membership and a new membership was formed.
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] mcasted message added to
> pending queue
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] Delivering 0 to 1
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] Delivering MCAST message
> with seq 1 to pending delivery queue
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] debug:
> pcmk_cluster_id_callback: Node update: nodeB (1.1.7)
> Aug 23 11:58:15 nodeB corosync[6874]: [SYNC ] confchg entries 1
> Aug 23 11:58:15 nodeB corosync[6874]: [SYNC ] Barrier Start Received
> From 1384196106
> Aug 23 11:58:15 nodeB corosync[6874]: [SYNC ] Barrier completion status
> for nodeid 1384196106 = 1.
> Aug 23 11:58:15 nodeB corosync[6874]: [SYNC ] Synchronization barrier
> completed
> Aug 23 11:58:15 nodeB corosync[6874]: [SYNC ] Synchronization actions
> starting for (dummy CLM service)
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] mcasted message added to
> pending queue
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] Delivering 1 to 2
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] Delivering MCAST message
> with seq 2 to pending delivery queue
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] mcasted message added to
> pending queue
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] releasing messages up to
> and including 1
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] Delivering 2 to 3
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] Delivering MCAST message
> with seq 3 to pending delivery queue
> Aug 23 11:58:15 nodeB corosync[6874]: [SYNC ] confchg entries 1
> Aug 23 11:58:15 nodeB corosync[6874]: [SYNC ] Barrier Start Received
> From 1384196106
> Aug 23 11:58:15 nodeB corosync[6874]: [SYNC ] Barrier completion status
> for nodeid 1384196106 = 1.
> Aug 23 11:58:15 nodeB corosync[6874]: [SYNC ] Synchronization barrier
> completed
> Aug 23 11:58:15 nodeB corosync[6874]: [SYNC ] Committing synchronization
> for (dummy CLM service)
> Aug 23 11:58:15 nodeB corosync[6874]: [SYNC ] Synchronization actions
> starting for (dummy AMF service)
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] releasing messages up to
> and including 2
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] mcasted message added to
> pending queue
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] releasing messages up to
> and including 3
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] Delivering 3 to 4
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] Delivering MCAST message
> with seq 4 to pending delivery queue
> Aug 23 11:58:15 nodeB corosync[6874]: [SYNC ] confchg entries 1
> Aug 23 11:58:15 nodeB corosync[6874]: [SYNC ] Barrier Start Received
> From 1384196106
> Aug 23 11:58:15 nodeB corosync[6874]: [SYNC ] Barrier completion status
> for nodeid 1384196106 = 1.
> Aug 23 11:58:15 nodeB corosync[6874]: [SYNC ] Synchronization barrier
> completed
> Aug 23 11:58:15 nodeB corosync[6874]: [SYNC ] Committing synchronization
> for (dummy AMF service)
> Aug 23 11:58:15 nodeB corosync[6874]: [SYNC ] Synchronization actions
> starting for (dummy CKPT service)
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] mcasted message added to
> pending queue
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] releasing messages up to
> and including 4
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] Delivering 4 to 5
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] Delivering MCAST message
> with seq 5 to pending delivery queue
> Aug 23 11:58:15 nodeB corosync[6874]: [SYNC ] confchg entries 1
> Aug 23 11:58:15 nodeB corosync[6874]: [SYNC ] Barrier Start Received
> From 1384196106
> Aug 23 11:58:15 nodeB corosync[6874]: [SYNC ] Barrier completion status
> for nodeid 1384196106 = 1.
> Aug 23 11:58:15 nodeB corosync[6874]: [SYNC ] Synchronization barrier
> completed
> Aug 23 11:58:15 nodeB corosync[6874]: [SYNC ] Committing synchronization
> for (dummy CKPT service)
> Aug 23 11:58:15 nodeB corosync[6874]: [SYNC ] Synchronization actions
> starting for (dummy EVT service)
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] mcasted message added to
> pending queue
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] Delivering 5 to 6
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] Delivering MCAST message
> with seq 6 to pending delivery queue
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] mcasted message added to
> pending queue
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] releasing messages up to
> and including 5
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] Delivering 6 to 7
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] Delivering MCAST message
> with seq 7 to pending delivery queue
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] releasing messages up to
> and including 6
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] mcasted message added to
> pending queue
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] releasing messages up to
> and including 7
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] Delivering 7 to 8
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] Delivering MCAST message
> with seq 8 to pending delivery queue
> Aug 23 11:58:15 nodeB corosync[6874]: [SYNC ] confchg entries 1
> Aug 23 11:58:15 nodeB corosync[6874]: [SYNC ] Barrier Start Received
> From 1384196106
> Aug 23 11:58:15 nodeB corosync[6874]: [SYNC ] Barrier completion status
> for nodeid 1384196106 = 1.
> Aug 23 11:58:15 nodeB corosync[6874]: [SYNC ] Synchronization barrier
> completed
> Aug 23 11:58:15 nodeB corosync[6874]: [SYNC ] Committing synchronization
> for (dummy EVT service)
> Aug 23 11:58:15 nodeB corosync[6874]: [SYNC ] Synchronization actions
> starting for (corosync cluster closed process group service v1.01)
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] mcasted message added to
> pending queue
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] mcasted message added to
> pending queue
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] Delivering 8 to a
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] Delivering MCAST message
> with seq 9 to pending delivery queue
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] Delivering MCAST message
> with seq a to pending delivery queue
> Aug 23 11:58:15 nodeB corosync[6874]: [CPG ] comparing: sender r(0)
> ip(10.40.129.210) ; members(old:0 left:0)
> Aug 23 11:58:15 nodeB corosync[6874]: [CPG ] chosen downlist: sender
> r(0) ip(10.40.129.210) ; members(old:0 left:0)
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] mcasted message added to
> pending queue
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] releasing messages up to
> and including 8
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] Delivering a to b
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] Delivering MCAST message
> with seq b to pending delivery queue
> Aug 23 11:58:15 nodeB corosync[6874]: [SYNC ] confchg entries 1
> Aug 23 11:58:15 nodeB corosync[6874]: [SYNC ] Barrier Start Received
> From 1384196106
> Aug 23 11:58:15 nodeB corosync[6874]: [SYNC ] Barrier completion status
> for nodeid 1384196106 = 1.
> Aug 23 11:58:15 nodeB corosync[6874]: [SYNC ] Synchronization barrier
> completed
> Aug 23 11:58:15 nodeB corosync[6874]: [SYNC ] Committing synchronization
> for (corosync cluster closed process group service v1.01)
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] mcasted message added to
> pending queue
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] releasing messages up to
> and including a
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] Delivering b to c
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] Delivering MCAST message
> with seq c to pending delivery queue
> Aug 23 11:58:15 nodeB corosync[6874]: [MAIN ] Completed service
> synchronization, ready to provide service.
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] releasing messages up to
> and including b
> Aug 23 11:58:15 nodeB corosync[6874]: [TOTEM ] releasing messages up to
> and including c
> Aug 23 11:58:15 nodeB stonith-ng: [6883]: debug:
> init_ais_connection_classic: Adding fd=5 to mainloop
> Aug 23 11:58:15 nodeB stonith-ng: [6883]: info:
> init_ais_connection_classic: AIS connection established
> Aug 23 11:58:15 nodeB attrd: [6885]: debug: init_ais_connection_classic:
> Adding fd=5 to mainloop
> Aug 23 11:58:15 nodeB attrd: [6885]: info: init_ais_connection_classic: AIS
> connection established
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] info: pcmk_ipc: Recorded
> connection 0x1f90170 for stonith-ng/6883
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] debug:
> process_ais_message: Msg[0] (dest=local:ais, from=nodeB:stonith-ng.6883,
> remote=true, size=5): 6883
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] info: pcmk_ipc: Recorded
> connection 0x1f944d0 for attrd/6885
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] debug:
> process_ais_message: Msg[0] (dest=local:ais, from=nodeB:attrd.6885,
> remote=true, size=5): 6885
> Aug 23 11:58:15 nodeB stonith-ng: [6883]: info: get_ais_nodeid: Server
> details: id=1384196106 uname=nodeB cname=pcmk
> Aug 23 11:58:15 nodeB stonith-ng: [6883]: info: init_ais_connection_once:
> Connection to 'classic openais (with plugin)': established
> Aug 23 11:58:15 nodeB stonith-ng: [6883]: debug: crm_new_peer: Creating
> entry for node nodeB/1384196106
> Aug 23 11:58:15 nodeB stonith-ng: [6883]: info: crm_new_peer: Node nodeB
> now has id: 1384196106
> Aug 23 11:58:15 nodeB stonith-ng: [6883]: info: crm_new_peer: Node
> 1384196106 is now known as nodeB
> Aug 23 11:58:15 nodeB stonith-ng: [6883]: debug:
> init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
> Aug 23 11:58:15 nodeB stonith-ng: [6883]: debug:
> init_client_ipc_comms_nodispatch: Could not init comms on:
> /var/run/crm/cib_rw
> Aug 23 11:58:15 nodeB stonith-ng: [6883]: debug: cib_native_signon_raw:
> Connection to command channel failed
> Aug 23 11:58:15 nodeB stonith-ng: [6883]: debug:
> init_client_ipc_comms_nodispatch: Attempting to talk on:
> /var/run/crm/cib_callback
> Aug 23 11:58:15 nodeB stonith-ng: [6883]: debug:
> init_client_ipc_comms_nodispatch: Could not init comms on:
> /var/run/crm/cib_callback
> Aug 23 11:58:15 nodeB stonith-ng: [6883]: debug: cib_native_signon_raw:
> Connection to callback channel failed
> Aug 23 11:58:15 nodeB stonith-ng: [6883]: debug: cib_native_signon_raw:
> Connection to CIB failed: connection failed
> Aug 23 11:58:15 nodeB stonith-ng: [6883]: debug: cib_native_signoff:
> Signing out of the CIB Service
> Aug 23 11:58:15 nodeB attrd: [6885]: info: get_ais_nodeid: Server details:
> id=1384196106 uname=nodeB cname=pcmk
> Aug 23 11:58:15 nodeB attrd: [6885]: info: init_ais_connection_once:
> Connection to 'classic openais (with plugin)': established
> Aug 23 11:58:15 nodeB attrd: [6885]: debug: crm_new_peer: Creating entry
> for node nodeB/1384196106
> Aug 23 11:58:15 nodeB attrd: [6885]: info: crm_new_peer: Node nodeB now has
> id: 1384196106
> Aug 23 11:58:15 nodeB attrd: [6885]: info: crm_new_peer: Node 1384196106 is
> now known as nodeB
> Aug 23 11:58:15 nodeB attrd: [6885]: info: main: Cluster connection active
> Aug 23 11:58:15 nodeB crmd: [6887]: info: Invoked: /usr/lib/pacemaker/crmd
> Aug 23 11:58:15 nodeB attrd: [6885]: info: main: Accepting attribute updates
> Aug 23 11:58:15 nodeB attrd: [6885]: notice: main: Starting mainloop...
> Aug 23 11:58:15 nodeB cib: [6882]: debug: init_ais_connection_classic:
> Adding fd=5 to mainloop
> Aug 23 11:58:15 nodeB cib: [6882]: info: init_ais_connection_classic: AIS
> connection established
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] info: pcmk_ipc: Recorded
> connection 0x1f8be10 for cib/6882
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] info: pcmk_ipc: Sending
> membership update 168 to cib
> Aug 23 11:58:15 nodeB corosync[6874]: [pcmk ] debug:
> process_ais_message: Msg[0] (dest=local:ais, from=nodeB:cib.6882,
> remote=true, size=5): 6882
> Aug 23 11:58:15 nodeB cib: [6882]: info: get_ais_nodeid: Server details:
> id=1384196106 uname=nodeB cname=pcmk
> Aug 23 11:58:15 nodeB cib: [6882]: info: init_ais_connection_once:
> Connection to 'classic openais (with plugin)': established
> Aug 23 11:58:15 nodeB cib: [6882]: debug: crm_new_peer: Creating entry for
> node nodeB/1384196106
> Aug 23 11:58:15 nodeB cib: [6882]: info: crm_new_peer: Node nodeB now has
> id: 1384196106
> Aug 23 11:58:15 nodeB cib: [6882]: info: crm_new_peer: Node 1384196106 is
> now known as nodeB
> Aug 23 11:58:15 nodeB crmd: [6887]: info: crm_log_init_worker: Changed
> active directory to /var/lib/heartbeat/cores/hacluster
> Aug 23 11:58:15 nodeB cib: [6882]: info: cib_init: Starting cib mainloop
> Aug 23 11:58:15 nodeB crmd: [6887]: notice: main: CRM Git Version:
> ee0730e13d124c3d58f00016c3376a1de5323cff
> Aug 23 11:58:15 nodeB cib: [6882]: info: ais_dispatch_message: Membership
> 168: quorum still lost
> Aug 23 11:58:15 nodeB cib: [6882]: info: crm_update_peer: Node nodeB:
> id=1384196106 state=member (new) addr=r(0) ip(10.40.129.210) (new) votes=1
> (new) born=0 seen=168 proc=00000000000000000000000000111312 (new)
> Aug 23 11:58:15 nodeB cib: [6882]: debug: Forking temp process
> write_cib_contents
> Aug 23 11:58:15 nodeB crmd: [6887]: debug: crmd_init: Starting crmd
> Aug 23 11:58:15 nodeB crmd: [6887]: debug: s_crmd_fsa: Processing
> I_STARTUP: [ state=S_STARTING cause=C_STARTUP origin=crmd_init ]
> Aug 23 11:58:15 nodeB crmd: [6887]: debug: do_fsa_action: actions:trace:
> #011// A_LOG
> Aug 23 11:58:15 nodeB crmd: [6887]: debug: do_log: FSA: Input I_STARTUP
> from crmd_init() received in state S_STARTING
> Aug 23 11:58:15 nodeB crmd: [6887]: debug: do_fsa_action: actions:trace:
> #011// A_STARTUP
> Aug 23 11:58:15 nodeB crmd: [6887]: debug: do_startup: Registering Signal
> Handlers
> Aug 23 11:58:15 nodeB crmd: [6887]: debug: do_startup: Creating CIB and LRM
> objects
> Aug 23 11:58:15 nodeB crmd: [6887]: debug: do_fsa_action: actions:trace:
> #011// A_CIB_START
> Aug 23 11:58:15 nodeB crmd: [6887]: debug:
> init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
> Aug 23 11:58:15 nodeB crmd: [6887]: debug:
> init_client_ipc_comms_nodispatch: Attempting to talk on:
> /var/run/crm/cib_callback
> Aug 23 11:58:15 nodeB crmd: [6887]: debug: cib_native_signon_raw:
> Connection to CIB successful
> Aug 23 11:58:15 nodeB crmd: [6887]: info: do_cib_control: CIB connection
> established
> Aug 23 11:58:15 nodeB crmd: [6887]: debug: do_fsa_action: actions:trace:
> #011// A_HA_CONNECT
> Aug 23 11:58:15 nodeB crmd: [6887]: info: get_cluster_type: Cluster type
> is: 'openais'
> Aug 23 11:58:15 nodeB crmd: [6887]: notice: crm_cluster_connect: Connecting
> to cluster infrastructure: classic openais (with plugin)
> Aug 23 11:58:15 nodeB crmd: [6887]: info: init_ais_connection_classic:
> Creating connection to our Corosync plugin
> Aug 23 11:58:15 nodeB cib: [6882]: debug: cib_common_callback_worker:
> Setting cib_refresh_notify callbacks for 6887
> (0278a741-301f-4b7f-b80f-a8ca30aedd4a): on
> Aug 23 11:58:15 nodeB cib: [6882]: debug: cib_common_callback_worker:
> Setting cib_diff_notify callbacks for 6887
> (0278a741-301f-4b7f-b80f-a8ca30aedd4a): on
> Aug 23 11:58:15 nodeB crmd: [6887]: debug: init_ais_connection_classic:
> Adding fd=7 to mainloop
> Aug 23 11:58:15 nodeB crmd: [6887]: info: init_ais_connection_classic: AIS
> connection established
> Aug 23 11:58:15 nodeB rsyslogd-2177: imuxsock begins to drop messages from
> pid 6874 due to rate-limiting
> Aug 23 11:58:15 nodeB crmd: [6887]: info: get_ais_nodeid: Server details:
> id=1384196106 uname=nodeB cname=pcmk
> Aug 23 11:58:15 nodeB crmd: [6887]: info: init_ais_connection_once:
> Connection to 'classic openais (with plugin)': established
> Aug 23 11:58:15 nodeB crmd: [6887]: debug: crm_new_peer: Creating entry for
> node nodeB/1384196106
> Aug 23 11:58:15 nodeB crmd: [6887]: info: crm_new_peer: Node nodeB now has
> id: 1384196106
> Aug 23 11:58:15 nodeB crmd: [6887]: info: crm_new_peer: Node 1384196106 is
> now known as nodeB
> Aug 23 11:58:15 nodeB crmd: [6887]: info: ais_status_callback: status:
> nodeB is now unknown
> Aug 23 11:58:15 nodeB crmd: [6887]: info: do_ha_control: Connected to the
> cluster
> Aug 23 11:58:15 nodeB crmd: [6887]: debug: do_fsa_action: actions:trace:
> #011// A_READCONFIG
> Aug 23 11:58:15 nodeB crmd: [6887]: debug: do_fsa_action: actions:trace:
> #011// A_LRM_CONNECT
> Aug 23 11:58:15 nodeB crmd: [6887]: debug: do_lrm_control: Connecting to
> the LRM
> Aug 23 11:58:15 nodeB lrmd: [6884]: debug: on_msg_register:client crmd
> [6887] registered
> Aug 23 11:58:15 nodeB crmd: [6887]: debug: do_lrm_control: LRM connection
> established
> Aug 23 11:58:15 nodeB crmd: [6887]: debug: do_fsa_action: actions:trace:
> #011// A_CCM_CONNECT
> Aug 23 11:58:15 nodeB crmd: [6887]: debug: do_fsa_action: actions:trace:
> #011// A_STARTED
> Aug 23 11:58:15 nodeB crmd: [6887]: info: do_started: Delaying start, no
> membership data (0000000000100000)
> Aug 23 11:58:15 nodeB crmd: [6887]: debug: register_fsa_input_adv: Stalling
> the FSA pending further input: cause=C_FSA_INTERNAL
> Aug 23 11:58:15 nodeB crmd: [6887]: debug: s_crmd_fsa: Exiting the FSA:
> queue=0, fsa_actions=0x2, stalled=true
> Aug 23 11:58:15 nodeB crmd: [6887]: debug: config_query_callback: Call 3 :
> Parsing CIB options
> Aug 23 11:58:15 nodeB crmd: [6887]: debug: config_query_callback: Shutdown
> escalation occurs after: 1200000ms
> Aug 23 11:58:15 nodeB crmd: [6887]: debug: config_query_callback: Checking
> for expired actions every 900000ms
> Aug 23 11:58:15 nodeB crmd: [6887]: debug: config_query_callback: Sending
> expected-votes=2 to corosync
> Aug 23 11:58:15 nodeB crmd: [6887]: info: ais_dispatch_message: Membership
> 168: quorum still lost
> Aug 23 11:58:15 nodeB crmd: [6887]: notice: crmd_peer_update: Status
> update: Client nodeB/crmd now has status [online] (DC=<null>)
> Aug 23 11:58:15 nodeB crmd: [6887]: info: ais_status_callback: status:
> nodeB is now member (was unknown)
> Aug 23 11:58:15 nodeB crmd: [6887]: info: crm_update_peer: Node nodeB:
> id=1384196106 state=member (new) addr=r(0) ip(10.40.129.210) (new) votes=1
> (new) born=0 seen=168 proc=00000000000000000000000000111312 (new)
> Aug 23 11:58:15 nodeB crmd: [6887]: debug: post_cache_update: Updated cache
> after membership event 168.
> Aug 23 11:58:15 nodeB crmd: [6887]: debug: post_cache_update:
> post_cache_update added action A_ELECTION_CHECK to the FSA
> Aug 23 11:58:15 nodeB crmd: [6887]: info: ais_dispatch_message: Membership
> 168: quorum still lost
> Aug 23 11:58:15 nodeB crmd: [6887]: debug: do_fsa_action: actions:trace:
> #011// A_STARTED
> Aug 23 11:58:15 nodeB crmd: [6887]: debug: do_started: Init server comms
> Aug 23 11:58:15 nodeB crmd: [6887]: notice: do_started: The local CRM is
> operational
> Aug 23 11:58:15 nodeB crmd: [6887]: debug: do_fsa_action: actions:trace:
> #011// A_ELECTION_CHECK
> Aug 23 11:58:15 nodeB crmd: [6887]: debug: do_election_check: Ignore
> election check: we not in an election
> Aug 23 11:58:15 nodeB crmd: [6887]: debug: s_crmd_fsa: Processing
> I_PENDING: [ state=S_STARTING cause=C_FSA_INTERNAL origin=do_started ]
> Aug 23 11:58:15 nodeB crmd: [6887]: debug: do_fsa_action: actions:trace:
> #011// A_LOG
> Aug 23 11:58:15 nodeB crmd: [6887]: debug: do_log: FSA: Input I_PENDING
> from do_started() received in state S_STARTING
> Aug 23 11:58:15 nodeB crmd: [6887]: notice: do_state_transition: State
> transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL
> origin=do_started ]
> Aug 23 11:58:15 nodeB crmd: [6887]: debug: do_fsa_action: actions:trace:
> #011// A_INTEGRATE_TIMER_STOP
> Aug 23 11:58:15 nodeB crmd: [6887]: debug: do_fsa_action: actions:trace:
> #011// A_FINALIZE_TIMER_STOP
> Aug 23 11:58:15 nodeB crmd: [6887]: debug: do_fsa_action: actions:trace:
> #011// A_CL_JOIN_QUERY
> Aug 23 11:58:15 nodeB cib: [6882]: info: Managed write_cib_contents process
> 6891 exited with return code 0.
> Aug 23 11:58:16 nodeB stonith-ng: [6883]: debug:
> init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
>
_______________________________________________
discuss mailing list
discuss@xxxxxxxxxxxx
http://lists.corosync.org/mailman/listinfo/discuss