Re: Two node NFS cluster serving multiple networks

I never use multiple routes; they can cause you some grief. Make sure
your /etc/hosts, /etc/resolv.conf, and /etc/nsswitch.conf files are set
up correctly. I currently use multiple networks and have no problems
with traffic going out the correct paths.
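
(For illustration, a minimal sketch of the name-service files mentioned
above; the resolver address 192.168.1.1 and the search domain are
assumptions, substitute your own:)

# /etc/nsswitch.conf -- consult /etc/hosts first, then DNS
hosts:      files dns

# /etc/resolv.conf -- point at the internal DNS server
# (192.168.1.1 is an assumption; use your actual resolver)
nameserver 192.168.1.1
search example.com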

B

isplist@xxxxxxxxxxxx wrote:
Guess I forgot to edit those IPs :).

I thought you could only have one default gateway on a machine.
I've never needed to deal with multiple NICs other than bonded ones.

PS: What does tab 1/2 mean?

Mike


On Thu, 13 Mar 2008 13:39:25 -0700, Alex Kompel wrote:
  
Google "linux policy based routing".

In your example you just need to set up different gateways for the two
interfaces, each in its own routing table. For example:
ip route add default via 69.2.237.57 dev eth0 tab 1
ip route add default via 192.168.1.1 dev eth1 tab 2
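
(Re the PS above: "tab" is iproute2 shorthand for "table"; each command
installs a default route in a separate routing table. For those tables
to actually be consulted, you also need rules selecting them by source
address. A minimal sketch, assuming the addresses shown in the ifconfig
output quoted below:)

# Send replies sourced from the public address via table 1 (eth0),
# and replies from the private address via table 2 (eth1):
ip rule add from 69.2.237.59 table 1
ip rule add from 192.168.1.102 table 2

# Verify the rules and the per-table routes:
ip rule show
ip route show table 1
ip route show table 2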


On Thu, Mar 13, 2008 at 9:23 AM, isplist@xxxxxxxxxxxx
<isplist@xxxxxxxxxxxx> wrote:
    
Is there a good document somewhere which explains, in not-too-technical
terms, how to use multiple NICs on a system? I've been running bonded
NICs for many years, but getting a machine to use two (or more)
networks is still a mystery to me.

For example, I have a VoIP machine with two NICs that I have problems
with because I don't understand the above yet.

This machine has one NIC that allows incoming VoIP/SIP connections to
its public IP address on a T1. The router blocks everything but that
traffic.

Then it has a second NIC with a private IP on it to allow for
management of the machine. Yet recently it lost its DNS; it can't seem
to get access to DNS on its own. I can force it to use DNS by typing
ping commands a couple of times, but it cannot do it on its own to get
its updates, for example.

Basically, I need the machine to see its public gateway at xx.x.237.57
to route its VoIP/SIP traffic, but I also need it to see its private
gateway on the 192.168.1.0 network so that it can use DNS and other
internal services properly.
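
(If the DNS server sits on the directly connected 192.168.1.0/24, no
extra route is needed to reach it. For internal services on other
private subnets, static routes are a simpler alternative to full policy
routing. A sketch, assuming the private gateway is 192.168.1.1, which
is an assumption:)

# Send all RFC 1918 private destinations via the internal gateway on
# eth1, leaving the default route on eth0 for the VoIP/SIP traffic
# (192.168.1.1 is assumed; substitute the real private gateway):
ip route add 10.0.0.0/8 via 192.168.1.1 dev eth1
ip route add 172.16.0.0/12 via 192.168.1.1 dev eth1
ip route add 192.168.0.0/16 via 192.168.1.1 dev eth1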

route -n
Kernel IP routing table
Destination   Gateway       Genmask          Flags Metric Ref  Use Iface
xx.x.237.56   0.0.0.0       255.255.255.248  U     0      0      0 eth0
192.168.1.0   0.0.0.0       255.255.255.0    U     0      0      0 eth1
169.254.0.0   0.0.0.0       255.255.0.0      U     0      0      0 eth1
0.0.0.0       69.2.237.57   0.0.0.0          UG    0      0      0 eth0

ifconfig
eth0      Link encap:Ethernet  HWaddr 00:90:27:DC:4B:E6
          inet addr:xx.x.237.59  Bcast:69.2.237.63  Mask:255.255.255.248
          inet6 addr: fe80::290:27ff:fedc:4be6/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:33910280 errors:16 dropped:0 overruns:0 frame:16
          TX packets:45988648 errors:0 dropped:0 overruns:0 carrier:0
          collisions:24746 txqueuelen:1000
          RX bytes:681966199 (650.3 MiB)  TX bytes:1657358619 (1.5 GiB)

eth1      Link encap:Ethernet  HWaddr 00:13:20:55:D7:CE
          inet addr:192.168.1.102  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::213:20ff:fe55:d7ce/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:87417784 errors:0 dropped:0 overruns:0 frame:0
          TX packets:70881957 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:4171601084 (3.8 GiB)  TX bytes:1547562481 (1.4 GiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:6501004 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6501004 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:897257336 (855.6 MiB)  TX bytes:897257336 (855.6 MiB)


Mike


On Wed, 12 Mar 2008 10:39:50 -0700, Alex Kompel wrote:

You will still need some way to tell the system through which interface
you want to route outgoing packets for each target.

You can achieve the same with greater ease by splitting the network
into two subnets and assigning each to a single interface.

It all depends on the problem you are trying to solve. If you want
redundancy, use active-passive bonding; if you want throughput, use
active-active bonding (if your switch supports link aggregation); if
you want security and isolation, use separate subnets.
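
(A minimal sketch of the active-passive case on CentOS/RHEL-style
systems; the addresses are illustrative, and BONDING_OPTS assumes a
reasonably recent initscripts -- older setups put the same options in
/etc/modprobe.conf instead:)

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=192.168.1.102
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=active-backup miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth0 (repeat for eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none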

-Alex

2008/3/12 Brian Kroth <bpkroth@xxxxxxxx>:
        
This is a hypothetical, but what if you have two interfaces on the same
network and want to force one service IP to one interface and the other
to a different interface? I think what everyone is wondering is how
much control one has over the service IP placement.

Thanks,
Brian

Finnur Örn Guðmundsson - TM Software <fog@xxxx> 2008-03-12 14:36:


          
Hi,

I see no reason why you could not have three different interfaces, each
connected to one of the networks you are trying to serve the NFS
requests to/from. RG Manager will add each floating IP to the "correct"
interface; that is, if your floating IP is 1.2.3.4 and you have an
interface with the IP address 1.2.3.3, it will add the IP to that
interface.
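
(For illustration, the cluster.conf shape this corresponds to -- the
service name here is made up, and surrounding configuration is elided:)

<rm>
  <resources>
    <ip address="1.2.3.4" monitor_link="1"/>
  </resources>
  <service name="nfssvc" autostart="1">
    <ip ref="1.2.3.4"/>
  </service>
</rm>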


Bgrds,
Finnur

-----Original Message-----
From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-
bounces@xxxxxxxxxx] On Behalf Of gordan@xxxxxxxxxx
Sent: 12 March 2008 14:10
To: linux clustering
Subject: Re:  Two node NFS cluster serving multiple
networks

Sounds very similar to what I'm trying to achieve (see the other
thread about binding failover resources to interfaces). I've not seen
a response yet, so I'm most curious to see if you'll get any.

Gordan

On Wed, 12 Mar 2008, Randy Brown wrote:

            
I am using a two-node cluster with CentOS 5 with up-to-date patches.
We have three different networks to which I would like to serve NFS
mounts from this cluster. Can this even be done? I have interfaces
available for each network in each node.


--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
