Re: bandwidth aggregation between 2 hosts in the same subnet

Linux Advanced Routing and Traffic Control

Jay Vosburgh schrieb:
> Grant Taylor <gtaylor@xxxxxxxxxxxxxxxxx> wrote:
> 
> >On 07/31/07 06:01, Ralf Gross wrote:
> >> But I don't have an isolated network. Maybe I'm still too blind to see a
> >> simple solution.

First, thanks for your very detailed reply.
 
[...]
> >The only other nasty thing that comes to mind is to assign additional MAC
> >/ IP sets to each system on their second interfaces.
> 
> 	Another similar Rube Goldberg sort of scheme I've set up in the
> past (in the lab, for bonding testing, not in a production environment,
> your mileage may vary, etc, etc) is to dedicate particular switch ports
> to particular vlans.  So, e.g.,
> 
> linux box eth0 ---- port 1:vlan 99 SWITCH(ES) port2:vlan 99 ---- eth0 linux box
> bond0     eth1 ---- port 3:vlan 88 SWITCH(ES) port4:vlan 88 ---- eth1 bond0

This is something I was thinking about too. It would effectively be a
direct crossover connection, which I have tested with bonding and
which worked very well in round-robin mode.
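For reference, a round-robin bond like the one I tested can be sketched
roughly as follows (a hypothetical example using modern iproute2 syntax;
the interface names and the address are mine, not from any real setup):

```shell
# Create a bond in balance-rr mode with link monitoring every 100 ms.
ip link add bond0 type bond mode balance-rr miimon 100

# Enslave both GbE interfaces (they must be down first).
ip link set eth0 down
ip link set eth0 master bond0
ip link set eth1 down
ip link set eth1 master bond0

# Bring the bond up and give it an address on the isolated vlan.
ip link set bond0 up
ip addr add 192.168.99.1/24 dev bond0
```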
 
> 	This sort of arrangement requires setting the Cisco switch ports
> to be native to a particular vlan, e.g., "switchport mode access",
> "switchport access vlan 88".  Theoretically, the intervening switches
> will simply pass the vlan traffic through and not decapsulate it until
> it reaches its end destination port.  You might also have to fool with
> the inter-switch links to make sure they're trunking properly (to pass
> the vlan traffic).
> 
> 	The downside of this sort of scheme is that the bond0 instances
> can only communicate with each other, unless you have the ability for
> one of the intermediate switches to route between the vlan and the
> regular network, or you have some other host also attached to the vlans
> to act as a gateway to the rest of the network.  My switches won't
> route, since they're switch-only models (2960/2970/3550), with no layer
> 3 capability, and I've never tried setting up a separate gateway host in
> such a configuration.

That wouldn't be a big problem; I can still take one interface of the
backup server out of the client vlan and add it to the regular backup
vlan (/24). Both hosts are equipped with 4 x GbE interfaces (2 x
client vlan + 2 x backup vlan).

> 	This also won't work if the intervening switches either (a)
> don't have higher capacity inter-switch links or (b) don't spread the
> traffic across the ISLs any better than they do on a regular
> etherchannel.
> 
> 	Basically, you want to take the switches out of the equation (so
> the load balance algorithm used by etherchannel doesn't disturb the even
> balance of the round robin transmission).  There might be other ways to
> essentially tunnel from port 1 to 2 and 3 to 4 (in my diagram above),
> but that's really what you're looking to do.

Ok.
 
> [TCP packet reordering]
> 	The bottom line is that you won't ever see N * X bandwidth on a
> single TCP connection, and the improvement factor falls off as the
> number of links in the aggregate increases.  With four links, you're
> doing pretty good to get about 2.3 links worth of throughput.  If memory
> serves, with two links you top out around 1.5.

That is the kind of factor I hope to achieve.
 
> 	So, the real question is: Since you've got two links, how
> important is that 0.5 improvement in transfer speed?  Can you instead
> figure out a way to split your backup problem into pieces, and run them
> concurrently?  

I use bacula for backup; I can add an alias with a different IP/port
for the host with the data. But I think this will become unwieldy over
time.

OT: This is not only a classical backup, it's a bit like an HSM
solution. We have large amounts of video data that will be moved from
the online storage to tapes. If the data is needed again (only a
little of it will be), it's possible that 5-10 TB of data needs to be
restored to the RAID again. So a 30-50% higher transfer rate could
save some hours.
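A quick back-of-the-envelope check (my own numbers, assuming the
theoretical 125 MB/s payload ceiling of one GbE link and the ~1.5x
round-robin factor mentioned above) shows why those hours matter:

```python
# Restore time for 10 TB: one GbE link vs. a ~1.5x two-link rr bond.
TB = 10**12                 # bytes per terabyte (decimal)
data = 10 * TB              # upper end of the 5-10 TB restore
gbe = 125 * 10**6           # 1 Gbit/s = 125 MB/s theoretical ceiling

single = data / gbe / 3600          # hours over a single link
bonded = data / (1.5 * gbe) / 3600  # hours at ~1.5 links of throughput

print(f"single link: {single:.1f} h, bonded: {bonded:.1f} h, "
      f"saved: {single - bonded:.1f} h")
```

That comes out to roughly 22 hours vs. 15 hours, i.e. about 7 hours
saved on a full 10 TB restore, before any real-world overhead.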
 
> 	That can be a much easier problem to tackle, given that it's
> trivial to add extra IP addresses to the hosts on each end, and
> presumably your higher end Cisco gear will permit a load-balance
> algorithm other than straight MAC address XOR.  E.g., the 2960 I've got
> handy permits:
> 
> slime(config)#port-channel load-balance ?
>   dst-ip       Dst IP Addr
>   dst-mac      Dst Mac Addr
>   src-dst-ip   Src XOR Dst IP Addr
>   src-dst-mac  Src XOR Dst Mac Addr
>   src-ip       Src IP Addr
>   src-mac      Src Mac Addr
> 
> 	so it's possible to get the IP address into the port selection
> math, and adding IP addresses is pretty straightforward.
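On the Linux side, the extra addresses for that scheme would be simple
aliases, something like this sketch (addresses are examples of mine,
not from this thread; each concurrent backup job would then target a
different destination address so the src-dst-ip hash can land the
flows on different channel members):

```shell
# Primary address plus one alias per additional concurrent flow.
ip addr add 10.0.0.11/24 dev eth0
ip addr add 10.0.0.12/24 dev eth0
```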

Yes, this is something I thought about first. But I fear that the
backup jobs and database records will get confusing. Backups should be
as simple as possible, therefore I'd like to solve this at a lower
level. But it's still an option.

Ralf

_______________________________________________
LARTC mailing list
LARTC@xxxxxxxxxxxxxxx
http://mailman.ds9a.nl/cgi-bin/mailman/listinfo/lartc
