Hi,

Nope. The shared storage is provided by an IBM SVC (SAN Volume Controller)
through QLogic 24xx HBA cards, and the switches are Brocade 48000s. The
devices are created on top of dm-multipath devices. I really think this has
something to do with the fence daemon, since I am unable to leave the fence
domain gracefully on a cold boot of the whole cluster.

Thanks,
Finnur

From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@xxxxxxxxxx]
On Behalf Of Theophanis Kontogiannis

Hi Finnur,

Is the LV running on top of DRBD? Please provide us with a bit more detail.

Thank you,
Theophanis Kontogiannis

From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@xxxxxxxxxx]
On Behalf Of Finnur Orn Guðmundsson - TM Software

Hi,

I have a 2-node cluster running
RHEL 5.1 x86_64, fully patched as of today. If I cold-boot the cluster (both
nodes), everything comes up smoothly and I can migrate services between the
nodes, etc. However, when I take one node down, I have difficulty leaving the
fence domain. If I kill the fence daemon on the node I am trying to remove
gracefully (or use "cman_tool leave force") and reboot it, the node comes back
up, cman starts, and it appears to join the cluster. The clvmd init script
then just sits there and hangs, and rgmanager does not start up correctly.
Both clvmd and rgmanager end up in a zombie state, and I have to power off or
fence the node to get it to reboot. The cluster never stabilizes itself until
I cold-boot both nodes; then it is OK until the next reboot. I have read about
similar cases but did not find any magic solution! ;)

My cluster.conf is attached. There is no firewall running on the machines in
question (chkconfig iptables off).

Various output from the node that is rebooted:

Output from "group_tool services":

type   level  name       id        state
fence  0      default    00000000  JOIN_STOP_WAIT  [1 2]
dlm    1      rgmanager  00000000  JOIN_STOP_WAIT  [1 2]

Output from "group_tool dump fenced":

1210193027 our_nodeid 1 our_name node-16
1210193027 listen 4 member 5 groupd 7
1210193029 client 3: join default
1210193029 delay post_join 120s post_fail 0s
1210193029 added 2 nodes from ccs
1210193542 client 3: dump

Various output from the other
node:

Output from "group_tool services":

type   level  name       id        state
fence  0      default    00010002  JOIN_START_WAIT   [1 2]
dlm    1      clvmd      00020002  none              [2]
dlm    1      rgmanager  00030002  FAIL_ALL_STOPPED  [1 2]

Output from "group_tool dump fenced":

1210191957 our_nodeid 2 our_name node-17
1210191957 listen 4 member 5 groupd 7
1210191958 client 3: join default
1210191958 delay post_join 120s post_fail 0s
1210191958 added 2 nodes from ccs
1210191958 setid default 65538
1210191958 start default 1 members 2
1210191958 do_recovery stop 0 start 1 finish 0
1210191958 node "node-16" not a cman member, cn 1
1210191958 add first victim node-16
1210191959 node "node-16" not a cman member, cn 1
1210191960 node "node-16" not a cman member, cn 1
1210191961 node "node-16" not a cman member, cn 1
1210191962 node "node-16" not a cman member, cn 1
1210191963 node "node-16" not a cman member, cn 1
1210191964 node "node-16" not a cman member, cn 1
1210191965 node "node-16" not a cman member, cn 1
1210191966 node "node-16" not a cman member, cn 1
1210191967 node "node-16" not a cman member, cn 1
1210191968 node "node-16" not a cman member, cn 1
1210191969 node "node-16" not a cman member, cn 1
1210191970 node "node-16" not a cman member, cn 1
1210191971 node "node-16" not a cman member, cn 1
1210191972 node "node-16" not a cman member, cn 1
1210191973 node "node-16" not a cman member, cn 1
1210191974 reduce victim node-16
1210191974 delay of 16s leaves 0 victims
1210191974 finish default 1
1210191974 stop default
1210191974 start default 2 members 1 2
1210191974 do_recovery stop 1 start 2 finish 1
1210193633 client 3: dump

Thanks in advance.

Kær kveðja / Best Regards,
Finnur
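P.S. As an aside for anyone skimming similar dumps: since "group_tool
services" prints one group per line with the state in the fifth column, stuck
groups can be flagged with a one-line awk filter. A minimal sketch, with the
sample output from the second node hard-coded for illustration (on a live node
you would pipe the real command output in instead):

```shell
#!/bin/sh
# Flag any group whose state is not "none", i.e. a transition that has not
# completed. The sample below is copied from the node-17 output above; on a
# real node you would feed in "group_tool services" (minus its header line)
# instead of this hard-coded string.
sample='fence 0 default 00010002 JOIN_START_WAIT [1 2]
dlm 1 clvmd 00020002 none [2]
dlm 1 rgmanager 00030002 FAIL_ALL_STOPPED [1 2]'

# Field 3 is the group name, field 5 its state.
printf '%s\n' "$sample" | awk '$5 != "none" { print $3, "stuck in", $5 }'
```

For the output above this prints "default stuck in JOIN_START_WAIT" and
"rgmanager stuck in FAIL_ALL_STOPPED", which matches the hang described.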
--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster