Hi Muntasir,
Thanks for your comprehensive response. I figured out earlier today that
the keyword I was missing is "bonding". If you search for VPN bonding,
a number of solutions surface. (I was previously searching for
"load balancing", etc.)
I think I found the answer. The key package is "ifenslave". This allows
you to bond multiple interfaces.
Each interface needs to have a unique IP on both sides, so even if your
server (in the data center) has a single 100M line you still need a
public IP for each ADSL line - as you say, so you can tell Linux to
route each IP address through a different modem.
Then you set up an SSH or VPN tunnel for each modem, so you get tun0,
tun1, etc. Then in your interfaces file you put something like this:
iface bond0 inet static
address 172.26.0.1
netmask 255.255.255.252
bond-slaves tun0 tun1
bond-mode balance-rr
That last line tells it to balance with round robin, i.e. it sends
packets across the slave interfaces in turn, one at a time.
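The tunnels themselves can come from OpenSSH's tun support, for example (a sketch; "server" and the bind addresses are placeholders, and the server needs PermitTunnel enabled in sshd_config):

```shell
# One tun pair per modem. -b binds each session to that line's local
# IP, which - assuming each source IP is routed via its own modem as
# described above - forces each tunnel over a different ADSL link.
ssh -f -b 192.168.1.10 -o Tunnel=point-to-point -w 0:0 root@server true
ssh -f -b 192.168.2.10 -o Tunnel=point-to-point -w 1:1 root@server true
# tun0/tun1 are then enslaved to bond0; the 172.26.0.x addresses live
# on bond0, not on the individual tunnel interfaces.
```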
This is a good writeup:
http://www.linuxfoundation.org/collaborate/workgroups/networking/bonding
There are a number of inefficiencies in the system:
- There is significant overhead in re-ordering the packets. The writeup
says that with 4 parallel links, TCP connections will only get about
2.3x the performance of a single link. UDP, if the application
tolerates packet ordering issues, can scale almost linearly.
- Traffic is balanced by number of packets, not necessarily throughput,
so with mixed traffic the modems may not be utilized evenly.
- And of course DSL has no "guaranteed" bandwidth, so if one leg falls
behind, a large number of packets from other connections may be
discarded because they arrive too far out of order.
I'm also concerned about how well the system deals with interference.
The DSL has no SLA. If one DSL leg times out, the servers may get
confused and stop communicating until I come in and reset the
connections. The data center is a 6-hour drive from here, so I would
have to build in back doors for remote control. It would almost take
special software to constantly monitor the network and re-negotiate /
reconfigure the link as conditions change.
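A minimal sketch of such a monitor, assuming two tunnels whose far ends answer at 172.26.N.2 and OpenVPN instances managed as systemd units named openvpn@tunnelN (all of these names and addresses are assumptions, not anything already configured):

```shell
#!/bin/sh
# Watchdog sketch: probe the far end of each tunnel and restart the
# corresponding VPN instance if it stops answering.

tunnel_peer() {
    # Map a tunnel index to its assumed far-end address.
    echo "172.26.${1}.2"
}

check_peer() {
    # Succeeds if the peer answers at least one of three pings.
    ping -c 3 -W 2 "$1" >/dev/null 2>&1
}

watchdog_pass() {
    for i in 0 1; do
        if ! check_peer "$(tunnel_peer "$i")"; then
            echo "tun$i appears down, restarting" >&2
            systemctl restart "openvpn@tunnel$i"  # assumed unit name
        fi
    done
}

# Only touch the network when explicitly asked to, e.g. from cron:
#   * * * * * /usr/local/sbin/tunnel-watchdog run
if [ "${1-}" = run ]; then
    watchdog_pass
fi
```

Run from cron every minute or so, this would at least recover from a hung tunnel without a 6-hour drive, though it does nothing about gradual degradation.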
This seems like something a student could use for his thesis :) but for
me it's probably better to just keep the T1 and be done with it.
On 09/13/2013 07:47 PM, cronolog+lartc wrote:
Hi there,
I was thinking, if I rent some rack space in a server room with a 10
MBit connection, could I have my Linux gateway set up with 6 NIC
cards, 1 LAN and 5xADSL with VPN to my server? The server could feed
data across all 5 lines evenly and possibly meter the data based on
the ping times. That way I would have a lot more bandwidth for the
dollar. 7 MBit down and 3.5MBit up for $275 a month, assuming the rack
space costs $100/mo
However, I have been searching around and have not found any indication
that the standard Linux kernel / OpenVPN can do something like this.
Particularly with ADSL you don't get a fixed bandwidth, so you need
to dynamically adjust your throttling based on what you actually get.
If I were to set this up, I'd first create 5 (OpenVPN) tunnels between
the local Linux gateway/server and the remote Linux server. Ensuring
each tunnel is forced over a different ADSL link could be fun. You could
set up 5 local routing tables, each with only one default route, one
table for each ADSL link, then use the port number to select the
appropriate routing table, and therefore the ADSL link. My personal
preference is to mark the packets in the iptables mangle table and use
the "ip rule" command to select the routing table based on the firewall
mark. For example (with 2 uplinks):
ip route add default via 192.168.1.1 table 101
ip route add default via 192.168.2.1 table 102
iptables -t mangle -A OUTPUT -d ${remoteServerIP} -p udp \
    --dport ${OpenVPNTunnel1Port} -j MARK --set-mark 201
iptables -t mangle -A OUTPUT -d ${remoteServerIP} -p udp \
    --dport ${OpenVPNTunnel2Port} -j MARK --set-mark 202
ip rule add fwmark 201 pref 11101 table 101
ip rule add fwmark 202 pref 11102 table 102
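As a sanity check once the rules are in place, recent iproute2 lets "ip route get" take a fwmark, so you can confirm which table each mark ends up consulting:

```shell
ip route get ${remoteServerIP} mark 201  # should show the table 101 route via 192.168.1.1
ip route get ${remoteServerIP} mark 202  # should show the table 102 route via 192.168.2.1
```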
(I assume there's an ADSL router per link and the public IPs are not on
the local box; otherwise I think it'd get messy handling multiple ppp
client sessions on the same box, especially if you don't have static IPs
on all the links. Well, there's always running each ppp session in a
dedicated virtual machine and setting up your virtual networks
appropriately to tie everything together, but let's not complicate
things more than they already are.)
Now that the tunnels are in place, you need to somehow distribute
traffic over them. I'd install quagga on both the local and remote
servers, and run OSPF between them over all 5 tunnels, with the remote
server pushing the default route back to the local one. Hopefully this
should install 5 default routes on the local server each pointing over a
different tunnel, and the kernel should load balance between them now
(you may need to tweak path costs and such in quagga). Also, the same
works the other way - you advertise your local subnet to the remote
server over all 5 tunnels with quagga, and the remote server should load
balance the traffic back to you.
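I'm not certain of the exact syntax without checking the manual, but a minimal ospfd.conf on the local server might look something like this (the tunnel subnets, LAN prefix, and area are all assumptions; the remote side would additionally use "default-information originate" under "router ospf" to push the default route back):

```
! /etc/quagga/ospfd.conf (local server) -- hypothetical addressing
router ospf
 network 172.26.0.0/30 area 0.0.0.0    ! tun0 point-to-point subnet
 network 172.26.1.0/30 area 0.0.0.0    ! tun1 (and so on for tun2-4)
 network 192.168.10.0/24 area 0.0.0.0  ! local LAN to advertise back
```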
I'm no quagga expert though (only using it to run RIP over OpenVPN at
the moment since I don't need the added benefits OSPF provides), so
would need to read the manual to work out the config to set this up.
But running a routing protocol, especially OSPF, in this scenario means
that if some ADSL links go down and come back up temporarily, or you
add/remove links permanently, OSPF will take care of modifying the
available routes on either server transparently for you.
I see load balancing routers, but they are connection based - i.e. any
one file transfer would still operate at 1.5 MBit; you can just have
multiple at the same time. This causes problems too: in my office we
have load balancing proxy servers, and since your IP address changes
all the time, many secure websites do not work.
I think even with this set-up, each connection flow would only go over
one tunnel; I don't think Linux routes on a per-packet basis. Nor would
I want it to in most cases: you'd get more issues with packets arriving
in the wrong order, which will hurt TCP throughput.
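For reference, the kind of multipath default route quagga would end up installing can also be created by hand (a sketch; the kernel then picks a nexthop per flow/destination, not per packet):

```shell
ip route add default \
    nexthop dev tun0 weight 1 \
    nexthop dev tun1 weight 1
```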
However, since you are routing everything via your remote server in this
case, which will almost certainly need to be NATing traffic as it
finally goes out via its single uplink, you won't have the issue you see
with your office proxy servers, which are basically load balancing
straight onto the Internet via multiple uplinks, hence the multiple-IP
issue you see.
As such, this may partially negate the fact that you're still limited to
1 ADSL-link's worth of bandwidth per flow, since you'll probably have
several flows going on simultaneously, especially for web browsing, so
I'd still expect an overall increase in performance for those kinds of
tasks.
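The NAT on the remote server would just be the usual masquerade setup (a sketch; "eth0" as the uplink interface name is an assumption):

```shell
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```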
I have a home office so I need to make sure that the ping times stay
low. This is pretty easy with the T1 because of the fixed bandwidth.
Since you're planning on sending all data via a remote server first,
you're already building in latency to your solution and so increasing
ping times. Short of testing and knowing the exact setup, it's hard to
say exactly how much added latency you'll see.
But this is a fairly complex design you're trying to come up with here,
and there are a few details missing to fully configure it, such as how
the ADSL links are presented to the local server, so it would take quite
some fiddling to get it all working. Good luck if you try to go through
with it though, I don't think it's impossible to configure. 5 links does
feel a bit excessive to me, but you know your own requirements.
-- Muntasir
--
To unsubscribe from this list: send the line "unsubscribe lartc" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html