Chrissie,
the patch looks generally good, but is there a reason to add a new
library call instead of tracking "quorum.wait_for_all" and, when it is
set to 0, executing code very similar to
message_handler_req_lib_votequorum_cancel_wait_for_all()?
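Roughly, the tracking could look like the sketch below inside
votequorum (untested, just to illustrate the idea; the function names
are made up and it assumes the internal icmap track API):

  #include <string.h>
  #include <corosync/icmap.h>

  static void wfa_key_changed(int32_t event, const char *key_name,
      struct icmap_notify_value new_val,
      struct icmap_notify_value old_val, void *user_data)
  {
      uint8_t value;

      /* only react to a u8 value being written */
      if (new_val.type != ICMAP_VALUETYPE_UINT8) {
          return;
      }

      memcpy(&value, new_val.data, sizeof(value));
      if (value == 0) {
          /* execute code very similar to
           * message_handler_req_lib_votequorum_cancel_wait_for_all() */
      }
  }

  static cs_error_t wfa_track_init(void)
  {
      icmap_track_t track = NULL;

      return icmap_track_add("quorum.wait_for_all",
          ICMAP_TRACK_ADD | ICMAP_TRACK_MODIFY,
          wfa_key_changed, NULL, &track);
  }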
But if we decide to go with the library call, there are a few things
that must be fixed:
- the version can be 7.1.0. We are adding a call, not changing an
existing one (so it's backwards compatible)
- We need to have support in cfgtool/quorumtool/... Keep in mind that
the main user (pcs) does not call the corosync API directly; it uses
the CLI tools.
- There should be a check that wait_for_all is really activated.
All of these would be solved for free by tracking "quorum.wait_for_all".
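And from the outside, clearing the flag would then not need any new
library call at all. A minimal client sketch using the public cmap API
(again, just an illustration):

  #include <stdio.h>
  #include <corosync/cmap.h>

  int main(void)
  {
      cmap_handle_t handle;
      cs_error_t err;

      err = cmap_initialize(&handle);
      if (err != CS_OK) {
          fprintf(stderr, "cmap_initialize failed: %d\n", err);
          return 1;
      }

      /* the votequorum track would pick this up and cancel
       * wait_for_all */
      err = cmap_set_uint8(handle, "quorum.wait_for_all", 0);
      if (err != CS_OK) {
          fprintf(stderr, "cmap_set_uint8 failed: %d\n", err);
      }

      cmap_finalize(handle);
      return (err == CS_OK) ? 0 : 1;
  }

or, with no code at all (which is what pcs would most likely use):

  corosync-cmapctl -s quorum.wait_for_all u8 0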
Regards,
Honza
Christine Caulfield wrote:
It's possible in a two_node cluster (and in others, but it's more
likely with just two) that a node could be booted up after downtime or
failure while the other node is not available for some reason. In this
case it would not be allowed to proceed because wait_for_all is
enforced.
This patch provides an API call to clear this flag in the desperate
situation where that becomes necessary. It should only be used with
extreme caution and will be wrapped up in pcs, which should also check
that fencing has been run.
Signed-off-by: Christine Caulfield <ccaulfie@xxxxxxxxxx>
_______________________________________________
discuss mailing list
discuss@xxxxxxxxxxxx
http://lists.corosync.org/mailman/listinfo/discuss