RE: Cluster Networks

On Mon, 2009-03-30 at 12:43 -0400, Jeff Sturm wrote:
> > -----Original Message-----
> > From: linux-cluster-bounces@xxxxxxxxxx 
> > [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Paul Dugas
> > Sent: Monday, March 30, 2009 7:06 AM
> > To: Linux-Cluster Mailing List
> > Subject:  Cluster Networks
> > 
> > I've a few machines sharing a couple of GFS/LVM volumes that are 
> > physically on an AoE device.  Each machine has two network 
> > interfaces: LAN and the AoE SAN.  I don't have IP addresses 
> > on the SAN interfaces, so the cluster is communicating via the LAN.
> > 
> > Is this ideal or should I configure them to use the SAN 
> > interfaces instead?  
> 
> It depends.  Is it your wish to maximize throughput or availability?
> 
> One consideration is MTU.  Given a standard blocksize of 4k on Linux,
> AoE initiators benefit from jumbo frames, since a complete block can be
> delivered in one packet.  On the other hand, packets from
> openais/lock_dlm are generally quite small and do not fragment at a
> standard MTU.
> 
> If you are able to run jumbo frames on all your network interfaces, AoE
> can use any interface and benefit from the extra throughput.  If, however,
> your switch ports are not configured for jumbo frames, you may be better
> off keeping the two on separate interfaces, unless the additional
> throughput isn't important to you.
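> 
> For example, a jumbo-frame setup is roughly the following (the interface
> name is just an example, and the switch ports have to accept ~9000-byte
> frames as well):
> 
>   # raise the MTU on the SAN-facing interface so a 4k block fits in one frame
>   ip link set dev eth1 mtu 9000
> 
>   # or persistently on RHEL, in /etc/sysconfig/network-scripts/ifcfg-eth1:
>   MTU=9000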
> 
> For maximum uptime, you can multipath AoE over two interfaces, so that
> if a single interface fails, traffic resumes on the other.
> Multipath isn't available for openais (I believe it is implemented but
> not supported) but you can run a bonded ethernet interface to achieve
> similar results.  An active/passive bonded pair connected to two
> separate switches would give you protection from failure of a single
> switch/cable/iface, which is very nice for a cluster, because you can
> design the network for no single point of failure (depending also on
> your power configuration).
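> 
> A minimal active/passive bond on RHEL looks something like this
> (hypothetical device names and addresses, untested sketch):
> 
>   # /etc/modprobe.conf -- load the bonding driver in active-backup mode
>   alias bond0 bonding
>   options bond0 mode=active-backup miimon=100
> 
>   # /etc/sysconfig/network-scripts/ifcfg-bond0
>   DEVICE=bond0
>   IPADDR=192.168.0.10
>   NETMASK=255.255.255.0
>   BOOTPROTO=none
>   ONBOOT=yes
> 
>   # /etc/sysconfig/network-scripts/ifcfg-eth0  (and likewise for eth1)
>   DEVICE=eth0
>   MASTER=bond0
>   SLAVE=yes
>   BOOTPROTO=none
>   ONBOOT=yes
> 
> With each slave cabled to a different switch, losing a switch only
> takes out one leg of the bond.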
> 
> If you can run both the SAN/LAN on jumbo frames, and multipath AoE, you
> can get very nice throughput.  With the latest AoE driver, an updated
> e1000 driver, and some network tuning, we can sustain 190MB/s AoE
> transfers on our test network.

Availability is not my concern here, but I appreciate the info.  I
maintain a physically separate SAN (separate switch) for the AoE traffic
with jumbo frames enabled, and those interfaces are already doubled up
to support AoE multipath.  That network is all Ethernet, no IP.

My question is more aimed at cluster stability and consistency.  The
members monitor each other via IP traffic over their LAN interfaces, and
I'm wondering if that is the correct way for the cluster to operate.
I'm not familiar with the internals of the clustering software at all,
but since the cluster is solely in place to share the GFS volumes, is it
best for the members to monitor each other over the same network they
use to access those volumes?  Or is it correct for them to use the LAN
that the cluster's clients are on?

It would be simple to set up an IP subnet for the SAN and adjust the
cluster configs to use those names/addresses instead.  I'm just
wondering if that's the "correct" way to do this.
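
If I did go that route, my rough understanding is that cman/openais
binds to whichever interface the node names in cluster.conf resolve to,
so the change would look something like this (hypothetical names and
addresses, fencing and the rest trimmed, untested):

  # /etc/hosts on each member: names that resolve to the SAN interfaces
  10.0.0.1  node1-san
  10.0.0.2  node2-san
  10.0.0.3  node3-san

  <!-- cluster.conf: use the SAN names so the cluster talks over the SAN -->
  <clusternodes>
    <clusternode name="node1-san" nodeid="1"/>
    <clusternode name="node2-san" nodeid="2"/>
    <clusternode name="node3-san" nodeid="3"/>
  </clusternodes>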

Thanks again,
Paul
-- 
Paul Dugas - paul@xxxxxxxx - 404.932.1355


--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
