Alex,

1.
Thank you very much. The Cisco setup is very useful, and so are the commands for testing multicast.

2.
Looking at your cluster.conf, I would have thought that any limits on the dlm and gfs lock rates are counterproductive in the days of multicore CPUs and GbE. In my opinion they should be unlimited; under high load the limiting factor will be saturation of one core by the gfs control daemon.
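If I remember correctly, setting plock_rate_limit to 0 disables the limit altogether, so the relevant lines would become something like this (untested on my side, plock_ownership left as you have it):

    <dlm plock_ownership="1" plock_rate_limit="0"/>
    <gfs_controld plock_rate_limit="0"/>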
Regards,
Chris Jankowski
Hi Chris,

For the switch port settings, check out this URL: http://www.openais.org/doku.php?id=faq:cisco_switches

We have finally configured an internal (private) VLAN joining one NIC of each blade server. All cluster-related traffic now goes through those interfaces (eth2 on both servers in our case), including the traffic generated by lock_dlm for the just-created GFS2 filesystem.

To check multicast connectivity, two very useful commands are "nc -u -vvn -z <multicast_IP> 5405" to generate some multicast UDP traffic and "tcpdump -i eth2 ether multicast" to check it from the other node (eth2 in my particular case, of course); see the example below.
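For example, with the cluster multicast group 239.0.0.1 used below:

    # on node B, watch eth2 for incoming multicast frames
    tcpdump -i eth2 ether multicast

    # on node A, generate some multicast UDP traffic towards the totem port
    nc -u -vvn -z 239.0.0.1 5405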
I have been playing a little with lock_dlm; here is how my cluster.conf looks now:
<?xml version="1.0"?>
<cluster config_version="7" name="VCluster">
    <fence_daemon post_fail_delay="0" post_join_delay="25"/>
    <clusternodes>
        <clusternode name="nodeaint" nodeid="1" votes="1">
            <multicast addr="239.0.0.1" interface="eth2"/>
            <fence>
                <method name="1">
                    <device name="nodeaiLO"/>
                </method>
            </fence>
        </clusternode>
        <clusternode name="nodebint" nodeid="2" votes="1">
            <multicast addr="239.0.0.1" interface="eth2"/>
            <fence>
                <method name="1">
                    <device name="nodebiLO"/>
                </method>
            </fence>
        </clusternode>
    </clusternodes>
    <cman expected_votes="1" two_node="1">
        <multicast addr="239.0.0.1"/>
    </cman>
    <fencedevices>
        <fencedevice agent="fence_ilo" hostname="nodeacn" login="user" name="nodeaiLO" passwd="hp"/>
        <fencedevice agent="fence_ilo" hostname="nodebcn" login="user" name="nodebiLO" passwd="hp"/>
    </fencedevices>
    <rm>
        <failoverdomains/>
        <resources/>
    </rm>
    <dlm plock_ownership="1" plock_rate_limit="500"/>
    <gfs_controld plock_rate_limit="500"/>
</cluster>
Next thing to add... I'm going to play a little with the quorum devices.
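Probably something along these lines, taken from the qdisk man pages and still untested here (the label must match what mkqdisk created, the heuristic IP is just a placeholder for a router on the private VLAN, and expected_votes in <cman> would need adjusting):

    <quorumd interval="1" tko="10" votes="1" label="vcluster_qdisk">
        <heuristic program="ping -c1 -w1 10.0.0.254" score="1" interval="2" tko="3"/>
    </quorumd>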
Hope it helps!
Alex
On 04/16/2010 05:00 PM, Jankowski, Chris wrote:
--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster