Re: shutdown of corosync-notifyd results in shutdown of pacemaker

Problem solved.
It is a bug in libqb.
On Solaris, MSG_NOSIGNAL is not defined, so libqb must ignore or handle SIGPIPE itself.
A pull request is pending: https://github.com/asalkeld/libqb/pulls.
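
For reference, the usual portability idiom looks roughly like this (a minimal sketch of the general approach, not the contents of the pull request):

    #include <signal.h>
    #include <string.h>
    #include <sys/socket.h>

    /* Solaris has no MSG_NOSIGNAL, so fall back to 0 and ignore SIGPIPE
     * instead; otherwise a send() to a socket whose peer has gone away
     * raises SIGPIPE and the default action kills the whole process. */
    #ifndef MSG_NOSIGNAL
    #define MSG_NOSIGNAL 0
    #endif

    static void
    ignore_sigpipe(void)
    {
        struct sigaction sa;

        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = SIG_IGN;
        (void) sigaction(SIGPIPE, &sa, NULL);
    }

With SIGPIPE ignored, a send() to a disconnected client simply fails with EPIPE and can be treated as a normal disconnect.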

Andreas

-----Original Message-----
From: discuss-bounces@xxxxxxxxxxxx [mailto:discuss-bounces@xxxxxxxxxxxx] On behalf of Grüninger, Andreas (LGL Extern)
Sent: Friday, October 12, 2012 16:29
To: discuss@xxxxxxxxxxxx
Subject: Re: shutdown of corosync-notifyd results in shutdown of pacemaker

I compiled the current master of
- libqb
- pacemaker
- corosync
on Solaris 11U7 (gcc 4.5.2) and openSUSE 12.2 (gcc 4.7.1).

For the test, a configuration with one node and no resources is used.

The start of corosync-notifyd is handled in nearly the same way on Linux and Solaris.
See the first two listings below. The Linux host is named "linux-t7bi" and the Solaris host is named "zd-sol-s1".
On Linux these entries are logged by the MAIN module; on Solaris the same entries are written by the MON module.

When corosync-notifyd is killed, Linux handles this gracefully and pacemaker logs nothing.
On Solaris the event is handled by pacemaker and not by corosync; there are no corosync log entries at all.
Pacemaker shuts down while corosync keeps running and stays healthy. If pacemaker is restarted, it reconnects to corosync.

How can this be when the same source code is compiled?

Andreas

Log from Linux after start of corosync-notifyd ....
Oct 12 14:21:03 [20166] linux-t7bi corosync debug   [QB    ] ipc_us.c:handle_new_connection:666 IPC credentials authenticated (20167-20287-30)
Oct 12 14:21:03 [20166] linux-t7bi corosync debug   [QB    ] ipc_us.c:qb_ipcs_us_connect:978 connecting to client (20167-20287-30)
Oct 12 14:21:03 [20166] linux-t7bi corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:269 connection created
Oct 12 14:21:03 [20166] linux-t7bi corosync debug   [QB    ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7febfe95a260
Oct 12 14:21:03 [20166] linux-t7bi corosync debug   [QB    ] ipc_us.c:handle_new_connection:666 IPC credentials authenticated (20167-20287-32)
Oct 12 14:21:03 [20166] linux-t7bi corosync debug   [QB    ] ipc_us.c:qb_ipcs_us_connect:978 connecting to client (20167-20287-32)
Oct 12 14:21:03 [20166] linux-t7bi corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:269 connection created
Oct 12 14:21:03 [20166] linux-t7bi corosync debug   [QUORUM] vsf_quorum.c:quorum_lib_init_fn:316 lib_init_fn: conn=0x7febfe95cd70
Oct 12 14:21:03 [20166] linux-t7bi corosync debug   [QUORUM] vsf_quorum.c:message_handler_req_lib_quorum_gettype:471 got quorum_type request on 0x7febfe95cd70
Oct 12 14:21:03 [20166] linux-t7bi corosync debug   [QUORUM] vsf_quorum.c:message_handler_req_lib_quorum_trackstart:412 got trackstart request on 0x7febfe95cd70
Oct 12 14:21:03 [20166] linux-t7bi corosync debug   [QUORUM] vsf_quorum.c:message_handler_req_lib_quorum_trackstart:420 sending initial status to 0x7febfe95cd70
Oct 12 14:21:03 [20166] linux-t7bi corosync debug   [QUORUM] vsf_quorum.c:send_library_notification:359 sending quorum notification to 0x7febfe95cd70, length = 52
Oct 12 14:21:03 [20166] linux-t7bi corosync debug   [QB    ] ipc_us.c:handle_new_connection:666 IPC credentials authenticated (20167-20287-34)
Oct 12 14:21:03 [20166] linux-t7bi corosync debug   [QB    ] ipc_us.c:qb_ipcs_us_connect:978 connecting to client (20167-20287-34)
Oct 12 14:21:03 [20166] linux-t7bi corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:269 connection created
Oct 12 14:21:03 [20166] linux-t7bi corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:716 HUP conn (20167-20287-34)
Oct 12 14:21:03 [20166] linux-t7bi corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:555 qb_ipcs_disconnect(20167-20287-34) state:2
Oct 12 14:21:03 [20166] linux-t7bi corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:414 cs_ipcs_connection_closed()
Oct 12 14:21:03 [20166] linux-t7bi corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:387 cs_ipcs_connection_destroyed()
....

Log from Solaris after start of corosync-notifyd ....
Oct 12 14:20:35 [20152] zd-sol-s1 corosync debug   [QB    ] ipc_us.c:handle_new_connection:666 IPC credentials authenticated (20153-20181-34)
Oct 12 14:20:35 [20152] zd-sol-s1 corosync debug   [QB    ] ipc_us.c:qb_ipcs_us_connect:978 connecting to client (20153-20181-34)
Oct 12 14:20:35 [20152] zd-sol-s1 corosync debug   [MON   ] ipc_glue.c:cs_ipcs_connection_created:269 connection created
Oct 12 14:20:35 [20152] zd-sol-s1 corosync debug   [MON   ] cmap.c:cmap_lib_init_fn:181 lib_init_fn: conn=84433e8
Oct 12 14:20:35 [20152] zd-sol-s1 corosync debug   [QB    ] ipc_us.c:handle_new_connection:666 IPC credentials authenticated (20153-20181-36)
Oct 12 14:20:35 [20152] zd-sol-s1 corosync debug   [QB    ] ipc_us.c:qb_ipcs_us_connect:978 connecting to client (20153-20181-36)
Oct 12 14:20:35 [20152] zd-sol-s1 corosync debug   [MON   ] ipc_glue.c:cs_ipcs_connection_created:269 connection created
Oct 12 14:20:35 [20152] zd-sol-s1 corosync debug   [MON   ] vsf_quorum.c:quorum_lib_init_fn:316 lib_init_fn: conn=840fcb8
Oct 12 14:20:35 [20152] zd-sol-s1 corosync debug   [MON   ] vsf_quorum.c:message_handler_req_lib_quorum_gettype:471 got quorum_type request on 840fcb8
Oct 12 14:20:35 [20152] zd-sol-s1 corosync debug   [MON   ] vsf_quorum.c:message_handler_req_lib_quorum_trackstart:412 got trackstart request on 840fcb8
Oct 12 14:20:35 [20152] zd-sol-s1 corosync debug   [MON   ] vsf_quorum.c:message_handler_req_lib_quorum_trackstart:420 sending initial status to 840fcb8
Oct 12 14:20:35 [20152] zd-sol-s1 corosync debug   [MON   ] vsf_quorum.c:send_library_notification:359 sending quorum notification to 840fcb8, length = 56
Oct 12 14:20:35 [20152] zd-sol-s1 corosync debug   [QB    ] ipc_us.c:handle_new_connection:666 IPC credentials authenticated (20153-20181-38)
Oct 12 14:20:35 [20152] zd-sol-s1 corosync debug   [QB    ] ipc_us.c:qb_ipcs_us_connect:978 connecting to client (20153-20181-38)
Oct 12 14:20:35 [20152] zd-sol-s1 corosync debug   [MON   ] ipc_glue.c:cs_ipcs_connection_created:269 connection created
Oct 12 14:20:35 [20152] zd-sol-s1 corosync debug   [QB    ] ipc_us.c:qb_ipc_us_recv_at_most:326 recv(fd 38) got 0 bytes assuming ENOTCONN
Oct 12 14:20:35 [20152] zd-sol-s1 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:555 qb_ipcs_disconnect(20153-20181-38) state:2
Oct 12 14:20:35 [20152] zd-sol-s1 corosync debug   [MON   ] ipc_glue.c:cs_ipcs_connection_closed:414 cs_ipcs_connection_closed()
Oct 12 14:20:35 [20152] zd-sol-s1 corosync debug   [MON   ] ipc_glue.c:cs_ipcs_connection_destroyed:387 cs_ipcs_connection_destroyed()
....



Log from Linux after stop of corosync-notifyd ....
Oct 12 14:22:59 [20166] linux-t7bi corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:716 HUP conn (20167-20287-32)
Oct 12 14:22:59 [20166] linux-t7bi corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:555 qb_ipcs_disconnect(20167-20287-32) state:2
Oct 12 14:22:59 [20166] linux-t7bi corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:414 cs_ipcs_connection_closed()
Oct 12 14:22:59 [20166] linux-t7bi corosync debug   [QUORUM] vsf_quorum.c:quorum_lib_exit_fn:328 lib_exit_fn: conn=0x7febfe95cd70
Oct 12 14:22:59 [20166] linux-t7bi corosync error   [MAIN  ] ipc_glue.c:msg_send_or_queue:526 event_send retuned -32, expected 340!
Oct 12 14:22:59 [20166] linux-t7bi corosync error   [MAIN  ] ipc_glue.c:msg_send_or_queue:526 event_send retuned -32, expected 344!
Oct 12 14:22:59 [20166] linux-t7bi corosync error   [MAIN  ] ipc_glue.c:msg_send_or_queue:526 event_send retuned -32, expected 340!
Oct 12 14:22:59 [20166] linux-t7bi corosync error   [MAIN  ] ipc_glue.c:msg_send_or_queue:526 event_send retuned -32, expected 344!
Oct 12 14:22:59 [20166] linux-t7bi corosync error   [MAIN  ] ipc_glue.c:msg_send_or_queue:526 event_send retuned -32, expected 344!
Oct 12 14:22:59 [20166] linux-t7bi corosync error   [MAIN  ] ipc_glue.c:msg_send_or_queue:526 event_send retuned -32, expected 352!
Oct 12 14:22:59 [20166] linux-t7bi corosync error   [MAIN  ] ipc_glue.c:msg_send_or_queue:526 event_send retuned -32, expected 344!
Oct 12 14:22:59 [20166] linux-t7bi corosync error   [MAIN  ] ipc_glue.c:msg_send_or_queue:526 event_send retuned -32, expected 340!
Oct 12 14:22:59 [20166] linux-t7bi corosync error   [MAIN  ] ipc_glue.c:msg_send_or_queue:526 event_send retuned -32, expected 344!
Oct 12 14:22:59 [20166] linux-t7bi corosync error   [MAIN  ] ipc_glue.c:msg_send_or_queue:526 event_send retuned -32, expected 344!
Oct 12 14:22:59 [20166] linux-t7bi corosync error   [MAIN  ] ipc_glue.c:msg_send_or_queue:526 event_send retuned -32, expected 344!
Oct 12 14:22:59 [20166] linux-t7bi corosync error   [MAIN  ] ipc_glue.c:msg_send_or_queue:526 event_send retuned -32, expected 344!
Oct 12 14:22:59 [20166] linux-t7bi corosync error   [MAIN  ] ipc_glue.c:msg_send_or_queue:526 event_send retuned -32, expected 340!
Oct 12 14:22:59 [20166] linux-t7bi corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:387 cs_ipcs_connection_destroyed()
Oct 12 14:22:59 [20166] linux-t7bi corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:716 HUP conn (20167-20287-30)
Oct 12 14:22:59 [20166] linux-t7bi corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:555 qb_ipcs_disconnect(20167-20287-30) state:2
Oct 12 14:22:59 [20166] linux-t7bi corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:414 cs_ipcs_connection_closed()
Oct 12 14:22:59 [20166] linux-t7bi corosync debug   [QB    ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7febfe95a260
Oct 12 14:22:59 [20166] linux-t7bi corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:387 cs_ipcs_connection_destroyed()
....

Log from Solaris after stop of corosync-notifyd ....
Oct 12 14:24:05 [20157] pacemakerd:    debug: qb_ipc_us_recv_at_most:   recv(fd 7) got 0 bytes assuming ENOTCONN
Oct 12 14:24:05 [20157] pacemakerd:    debug: _check_connection_state:  interpreting result -134 as a disconnect: Transport endpoint is not connected (134)
Oct 12 14:24:05 [20157] pacemakerd:    error: cfg_connection_destroy:   Connection destroyed
Oct 12 14:24:05 [20157] pacemakerd:   notice: pcmk_shutdown_worker:     Shuting down Pacemaker
Oct 12 14:24:05 [20157] pacemakerd:   notice: stop_child:       Stopping crmd: Sent -15 to process 20163
Oct 12 14:24:05 [20157] pacemakerd:    debug: qb_ipc_us_recv_at_most:   recv(fd 9) got 0 bytes assuming ENOTCONN
Oct 12 14:24:05 [20157] pacemakerd:    debug: _check_connection_state:  interpreting result -134 as a disconnect: Transport endpoint is not connected (134)
Oct 12 14:24:05 [20157] pacemakerd:    error: cpg_connection_destroy:   Connection destroyed
Oct 12 14:24:05 [20159] stonith-ng:    debug: qb_ipc_us_recv_at_most:   recv(fd 6) got 0 bytes assuming ENOTCONN
Oct 12 14:24:05 [20159] stonith-ng:    debug: _check_connection_state:  interpreting result -134 as a disconnect: Transport endpoint is not connected (134)
Oct 12 14:24:05 [20159] stonith-ng:    error: pcmk_cpg_dispatch:        Connection to the CPG API failed: 2
Oct 12 14:24:05 [20163]       crmd:     info: crm_signal_dispatch:      Invoking handler for signal 15: Terminated
Oct 12 14:24:05 [20159] stonith-ng:    error: stonith_peer_ais_destroy:         AIS connection terminated
Oct 12 14:24:05 [20163]       crmd:   notice: crm_shutdown:     Requesting shutdown, upper limit is 1200000ms
Oct 12 14:24:05 [20159] stonith-ng:     info: stonith_shutdown:         Terminating with  1 clients
Oct 12 14:24:05 [20163]       crmd:    debug: crm_timer_start:  Started Shutdown Escalation (I_STOP:1200000ms), src=21
Oct 12 14:24:05 [20159] stonith-ng:    debug: cib_native_signoff:       Signing out of the CIB Service
Oct 12 14:24:05 [20161]      attrd:    debug: qb_ipc_us_recv_at_most:   recv(fd 6) got 0 bytes assuming ENOTCONN
Oct 12 14:24:05 [20163]       crmd:    debug: s_crmd_fsa:       Processing I_SHUTDOWN: [ state=S_NOT_DC cause=C_SHUTDOWN origin=crm_shutdown ]
Oct 12 14:24:05 [20159] stonith-ng:    debug: qb_ipcc_disconnect:       qb_ipcc_disconnect()
 

-----Original Message-----
From: Andrew Beekhof [mailto:andrew@xxxxxxxxxxx]
Sent: Friday, October 12, 2012 01:58
To: Grüninger, Andreas (LGL Extern)
Cc: discuss@xxxxxxxxxxxx
Subject: Re: shutdown of corosync-notifyd results in shutdown of pacemaker

More specifically, stopping corosync-notifyd results in all of Pacemaker's connections to Corosync being terminated.
Andreas: Did you test this on Linux, or only on Solaris?

On Thu, Oct 11, 2012 at 11:45 PM, Grüninger, Andreas (LGL Extern) <Andreas.Grueninger@xxxxxxxxxx> wrote:
> When I start
> corosync-notifyd -f -l -s -m <MONITORINGSERVER> and close it with
> CTRL-C, pacemaker shuts down.
> Please see below for the details.
>
> I compiled the current master of corosync (tag 2.1.0) and the current master of pacemaker.
> The OS is Solaris 11U7.
>
> Is this a feature or a bug?
> On Solaris, libqb must be patched to avoid errors.
> Please see
> https://lists.fedorahosted.org/pipermail/quarterback-devel/2012-September/000921.html "[PATCH] -ENOTCONN handled as error when client disconnects"
> Maybe this patch should not deliver -ESHUTDOWN when a client disconnects.
> IMHO this is the adequate result.
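> For illustration only (this is not libqb's actual code), the disconnect detection the patch is about boils down to mapping a zero-byte recv() to a disconnect code:
>
>     #include <errno.h>
>     #include <sys/socket.h>
>     #include <sys/types.h>
>
>     /* On a stream socket, recv() returning 0 means the peer closed the
>      * connection in an orderly way; report that as a disconnect (here
>      * -ENOTCONN, matching the libqb log message) rather than as a hard
>      * error. */
>     static ssize_t recv_or_disconnect(int fd, void *buf, size_t len)
>     {
>         ssize_t rc = recv(fd, buf, len, 0);
>
>         if (rc == 0) {
>             return -ENOTCONN;   /* orderly disconnect by the peer */
>         }
>         if (rc < 0) {
>             return -errno;      /* real error */
>         }
>         return rc;              /* bytes received */
>     }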
>
> Andreas
>
>
> On Thu, Oct 4, 2012 at 5:57 PM, Grüninger, Andreas (LGL Extern) <Andreas.Grueninger@xxxxxxxxxx> wrote:
>>>> Is this an error or the desired result?
>>
>>>Based on the logs, pacemaker thinks corosync died.  Did that happen?
>>>If so there is not much pacemaker can do :-(
>>
>> And that is absolutely OK when corosync dies.
>> But corosync does not die; it is still healthy.
>> It is corosync-notifyd, which is started in addition to corosync as a separate process, that is terminated: with kill when running as a daemon, or with Ctrl-C when running in the foreground.
>> The job of corosync-notifyd is to send SNMP traps.
>> This is the same functionality that crm_mon -C .. -S ... provides for pacemaker.
>>
>> So either corosync-notifyd sends the wrong signal or pacemaker does a little too much.
>> Pacemaker should simply ignore this closing connection.
>
> All the Pacemaker daemons are being told, by Corosync itself, that their connections to Corosync are dead.
> It's a little difficult to ignore that.
>
>> Is there a chance to handle this in pacemaker, or should this better be solved in corosync/corosync-notifyd?
>
> It needs to be addressed in corosync/corosync-notifyd.
> Corosync's CPG library is the one invoking our
> cpg_connection_destroy() callback.
>
>>
>> Andreas
>>
>> -----Original Message-----
>> From: Andrew Beekhof [mailto:andrew@xxxxxxxxxxx]
>> Sent: Wednesday, October 3, 2012 01:09
>> To: The Pacemaker cluster resource manager
>> Subject: Re: [Pacemaker] Exiting corosync-notifyd results in shutting
>> down of pacemakerd
>>
>> On Wed, Oct 3, 2012 at 2:51 AM, Grüninger, Andreas (LGL Extern) <Andreas.Grueninger@xxxxxxxxxx> wrote:
>>> I am currently investigating the monitoring of corosync/pacemaker with SNMP.
>>> crm_mon used with the OCF resource ClusterMon works as it should.
>>>
>>> But corosync-notifyd can't be used in our case.
>>> I start corosync-notifyd in the foreground as follows:
>>> corosync-notifyd -f -l -s -m 10.50.235.1
>>>
>>> When I stop the running corosync-notifyd with CTRL-C, pacemaker shuts down with the following entries in the logfile.
>>> Is this an error or the desired result?
>>
>> Based on the logs, pacemaker thinks corosync died.  Did that happen?
>> If so there is not much pacemaker can do :-(
>>
>>>
>>> ....
>>> Oct 02 18:42:19 [27126] pacemakerd:    error: cfg_connection_destroy:   Connection destroyed
>>> Oct 02 18:42:19 [27126] pacemakerd:   notice: pcmk_shutdown_worker:     Shuting down Pacemaker
>>> Oct 02 18:42:19 [27126] pacemakerd:   notice: stop_child:       Stopping crmd: Sent -15 to process 27177
>>> Oct 02 18:42:19 [27126] pacemakerd:    error: cpg_connection_destroy:   Connection destroyed
>>> Oct 02 18:42:19 [27177]       crmd:     info: crm_signal_dispatch:      Invoking handler for signal 15: Terminated
>>> Oct 02 18:42:19 [27177]       crmd:   notice: crm_shutdown:     Requesting shutdown, upper limit is 1200000ms
>>> Oct 02 18:42:19 [27128] stonith-ng:    error: pcmk_cpg_dispatch:        Connection to the CPG API failed: 2
>>> Oct 02 18:42:19 [27177]       crmd:     info: do_shutdown_req:  Sending shutdown request to zd-sol-s1-v61
>>> Oct 02 18:42:19 [27128] stonith-ng:    error: stonith_peer_ais_destroy:         AIS connection terminated
>>> Oct 02 18:42:19 [27128] stonith-ng:     info: stonith_shutdown:         Terminating with  1 clients
>>> Oct 02 18:42:19 [27130]      attrd:    error: pcmk_cpg_dispatch:        Connection to the CPG API failed: 2
>>> Oct 02 18:42:19 [27130]      attrd:     crit: attrd_ais_destroy:        Lost connection to Corosync service!
>>> Oct 02 18:42:19 [27130]      attrd:   notice: main:     Exiting...
>>> Oct 02 18:42:19 [27130]      attrd:   notice: main:     Disconnecting client 81ffc38, pid=27177...
>>> Oct 02 18:42:19 [27128] stonith-ng:     info: qb_ipcs_us_withdraw:      withdrawing server sockets
>>> Oct 02 18:42:19 [27128] stonith-ng:     info: crm_xml_cleanup:  Cleaning up memory from libxml2
>>> Oct 02 18:42:19 [27130]      attrd:    error: attrd_cib_connection_destroy:     Connection to the CIB terminated...
>>> Oct 02 18:42:19 [27127]        cib:    error: pcmk_cpg_dispatch:        Connection to the CPG API failed: 2
>>> Oct 02 18:42:19 [27127]        cib:    error: cib_ais_destroy:  Corosync connection lost!  Exiting.
>>> Oct 02 18:42:19 [27129]       lrmd:     info: lrmd_ipc_destroy:         LRMD client disconnecting 807e768 - name: crmd id: 1d659f61-d6e2-4ef3-f674-b9a8ba8029e8
>>> Oct 02 18:42:19 [27127]        cib:     info: terminate_cib:    cib_ais_destroy: Exiting fast...
>>> Oct 02 18:42:19 [27127]        cib:     info: qb_ipcs_us_withdraw:      withdrawing server sockets
>>> Oct 02 18:42:19 [27127]        cib:     info: qb_ipcs_us_withdraw:      withdrawing server sockets
>>> Oct 02 18:42:19 [27127]        cib:     info: qb_ipcs_us_withdraw:      withdrawing server sockets
>>> Oct 02 18:42:19 [27126] pacemakerd:    error: pcmk_child_exit:  Child process attrd exited (pid=27130, rc=1)
>>> Oct 02 18:42:19 [27126] pacemakerd:    error: send_cpg_message:         Sending message via cpg FAILED: (rc=9) Bad handle
>>> Oct 02 18:42:19 [27126] pacemakerd:    error: pcmk_child_exit:  Child process cib exited (pid=27127, rc=64)
>>> Oct 02 18:42:19 [27126] pacemakerd:    error: send_cpg_message:         Sending message via cpg FAILED: (rc=9) Bad handle
>>> Oct 02 18:42:19 [27126] pacemakerd:   notice: pcmk_child_exit:  Child process crmd terminated with signal 13 (pid=27177, core=0)
>>> Oct 02 18:42:19 [27126] pacemakerd:    error: send_cpg_message:         Sending message via cpg FAILED: (rc=9) Bad handle
>>> Oct 02 18:42:19 [27126] pacemakerd:   notice: stop_child:       Stopping pengine: Sent -15 to process 27131
>>> Oct 02 18:42:19 [27126] pacemakerd:     info: pcmk_child_exit:  Child process pengine exited (pid=27131, rc=0)
>>> Oct 02 18:42:19 [27126] pacemakerd:    error: send_cpg_message:         Sending message via cpg FAILED: (rc=9) Bad handle
>>> Oct 02 18:42:19 [27126] pacemakerd:   notice: stop_child:       Stopping lrmd: Sent -15 to process 27129
>>> Oct 02 18:42:19 [27129]       lrmd:     info: crm_signal_dispatch:      Invoking handler for signal 15: Terminated
>>> Oct 02 18:42:19 [27129]       lrmd:     info: lrmd_shutdown:    Terminating with  0 clients
>>> Oct 02 18:42:19 [27129]       lrmd:     info: qb_ipcs_us_withdraw:      withdrawing server sockets
>>> Oct 02 18:42:19 [27126] pacemakerd:     info: pcmk_child_exit:  Child process lrmd exited (pid=27129, rc=0)
>>> Oct 02 18:42:19 [27126] pacemakerd:    error: send_cpg_message:         Sending message via cpg FAILED: (rc=9) Bad handle
>>> Oct 02 18:42:19 [27126] pacemakerd:   notice: stop_child:       Stopping stonith-ng: Sent -15 to process 27128
>>> Oct 02 18:42:19 [27126] pacemakerd:   notice: pcmk_child_exit:  Child process stonith-ng terminated with signal 11 (pid=27128, core=128)
>>> Oct 02 18:42:19 [27126] pacemakerd:    error: send_cpg_message:         Sending message via cpg FAILED: (rc=9) Bad handle
>>> Oct 02 18:42:19 [27126] pacemakerd:   notice: pcmk_shutdown_worker:     Shutdown complete
>>> Oct 02 18:42:19 [27126] pacemakerd:     info: qb_ipcs_us_withdraw:      withdrawing server sockets
>>> Oct 02 18:42:19 [27126] pacemakerd:     info: main:     Exiting pacemakerd
>>>
>>> Andreas
>>>
>>> _______________________________________________
>>> Pacemaker mailing list: Pacemaker@xxxxxxxxxxxxxxxxxxx 
>>> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>>>
>>> Project Home: http://www.clusterlabs.org Getting started:
>>> http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
>>> Bugs: http://bugs.clusterlabs.org
>>
>> _______________________________________________
>> Pacemaker mailing list: Pacemaker@xxxxxxxxxxxxxxxxxxx 
>> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>>
>> Project Home: http://www.clusterlabs.org Getting started:
>> http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
>> Bugs: http://bugs.clusterlabs.org
>
> _______________________________________________
> Pacemaker mailing list: Pacemaker@xxxxxxxxxxxxxxxxxxx 
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>
> Project Home: http://www.clusterlabs.org Getting started: 
> http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
>
>
> _______________________________________________
> discuss mailing list
> discuss@xxxxxxxxxxxx
> http://lists.corosync.org/mailman/listinfo/discuss

_______________________________________________
discuss mailing list
discuss@xxxxxxxxxxxx
http://lists.corosync.org/mailman/listinfo/discuss



