Multipath questions

Hello everyone, I have a few questions regarding the multipath
features of current kernels. I hope this is the right place to ask, so
here we go.

I've set up a load-balancing NAT box for about 30 machines, going out
via two 2400/256 kbps ADSL lines over PPPoE. I've applied the patches at
http://www.ssi.bg/~ja/. But I can't add "nexthop dev ppp0 nexthop dev
ppp1" at boot, because sometimes the devices are not up yet. So I made a
boot script that does "nexthop dev eth0" -- packets go nowhere until
PPP comes up -- and from /etc/ppp/ip-up I call a script that finds all
the ppp devices that are up and runs "route change default table xxx
[nexthop dev pppN]" for every one of them. Works just fine.
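In case it helps, the ip-up hook looks roughly like this. It's a
simplified sketch, not the exact script I run: the table name "xxx" is
a placeholder, and the ECHO variable makes it a dry run that only
prints the command it would execute.

```shell
#!/bin/sh
# Rebuild the multipath default route from whatever ppp devices are up.
# ECHO=echo makes this a dry run; set ECHO= to really run the command.
ECHO=echo

CMD="ip route change default table xxx"
NEXTHOPS=""
for dev in /sys/class/net/ppp*; do
    [ -e "$dev" ] || continue          # glob matched nothing: no ppp up
    NEXTHOPS="$NEXTHOPS nexthop dev ${dev##*/}"
done

if [ -n "$NEXTHOPS" ]; then
    CMD="$CMD$NEXTHOPS"
else
    # no ppp link yet: point the default at eth0 so packets go nowhere
    CMD="$CMD dev eth0"
fi
$ECHO $CMD
```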

first question: Should I add this script to ip-down too? Or will the
DGD patches take care of the situation?

After the connection is established I always get (of course) two
different public addresses, one for each link. But the gateway is
always the same. For some reason this ISP (Telecom Argentina) seems to
route all of its traffic through a single router, 200.3.60.1, for the
whole country (about 100,000 ADSL users).

second question: Does having the same gateway affect the behavior of multipath?

The server has been running for a few days now, and it has already
frozen several times. I don't have direct access to that machine, and
can't see the screen to check whether it was a kernel panic. But the
syslog showed "Badness in dst_release". Searching for it on Google I
found http://www.mail-archive.com/netdev@xxxxxxxxxxxxxxx/msg02579.html
and some discussion about disabling CONFIG_IP_ROUTE_MULTIPATH_CACHED.

third question: Will the load balancing features still work with this
disabled, or does this option have nothing to do with it? Should I
apply the patch attached to that message and see what happens?
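If I try that, I guess the relevant .config fragment would look like
this (assuming CONFIG_IP_ROUTE_MULTIPATH itself stays enabled and only
the cached variant goes away):

```
# keep the classic multipath code, disable only the cached variant
CONFIG_IP_ROUTE_MULTIPATH=y
# CONFIG_IP_ROUTE_MULTIPATH_CACHED is not set
```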

Also, I saw that the balancing doesn't work too well. I don't know if
the server just wasn't under much traffic (the ip_conntrack table had
fewer than 300 entries when I checked), or if it's simply not working
properly. What I see in MRTG is steady traffic on one interface, and
small peaks followed by nothing on the other. I read at
http://www.ssi.bg/~ja/nano.txt that routes are cached, so traffic
going to the same address stays on the same provider. This NAT box
will be routing multiplayer games to different servers, but I guess
there is the possibility that, sooner or later, all gamers will be on
the same server at once, so everyone will go out the same DSL line
while the other sits idle.
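For what it's worth, this is how I've been checking which line a
destination got cached on. The address is just an example, and the
ECHO variable keeps it a dry run here:

```shell
#!/bin/sh
# ECHO=echo makes this a dry run that only prints the commands;
# 203.0.113.10 is an example address, not a real game server.
ECHO=echo

# Which nexthop did the cache pick for this destination?
$ECHO ip route get 203.0.113.10

# Flush the cache so the next lookup re-runs the multipath selection.
$ECHO ip route flush cache
```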

fourth question: is there a way to add some kind of "preferred route"
for certain packets? Say I want all the even machines (192.168.0.2, 4,
6, ...) to go through ppp0 and all the odd machines
(192.168.0.1, 3, 5, ...) to go through ppp1, and if the preferred route
is not available, to fall back to the only available route. This would
force some more load balancing even if most people are playing on the
same server. (Of course it would do nothing if I'm unlucky enough that
everyone on the even machines is playing on the same server... :D)
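What I have in mind is something like this untested sketch with
iptables marks plus policy routing. The u32 match, the mark values
1/2, and the table names "odd"/"even" (which would need entries in
/etc/iproute2/rt_tables) are all my assumptions, and ECHO=echo keeps
it a dry run:

```shell
#!/bin/sh
# Untested sketch: mark packets by the parity of the source address,
# then send each mark out its preferred ppp device via its own table.
# ECHO=echo makes this a dry run that only prints the commands.
ECHO=echo

# Offset 12 in the IP header is the 32-bit source address, so
# "12&0x1" tests the low bit of its last octet.
$ECHO iptables -t mangle -A PREROUTING -s 192.168.0.0/24 \
      -m u32 --u32 "12&0x1=0x1" -j MARK --set-mark 1     # odd hosts
$ECHO iptables -t mangle -A PREROUTING -s 192.168.0.0/24 \
      -m u32 --u32 "12&0x1=0x0" -j MARK --set-mark 2     # even hosts

# Marked traffic consults its own table before the main one.
$ECHO ip rule add fwmark 1 table odd prio 100
$ECHO ip rule add fwmark 2 table even prio 100
$ECHO ip route add default dev ppp1 table odd
$ECHO ip route add default dev ppp0 table even
```

If ppp1 goes down, its route should disappear from table "odd", so the
lookup would fall through to the main table's multipath default --
which would give the fallback behavior I'm after, if I've understood
policy routing correctly.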

As I'm using NAT (-j MASQUERADE), I'm guessing the transition won't be
seamless when a link goes down. I doubt the sessions can be moved to
the other IP (and even then, the remote server wouldn't recognize the
new address). So what if I used load balancing on a router with public
addresses? Would that kind of transition be (almost) transparent to
the user when one link goes down?

Thanks in advance,
Hernán Freschi
-
To unsubscribe from this list: send the line "unsubscribe linux-net" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
