bootable iso for netem usage?


 



Hello all,

It looks like netem can do some really interesting WAN emulation.

Has anyone packaged this into a "recommended / well-known" bootable iso distro with a simple web-based UI?

I found WANem ( http://wanem.sourceforge.net/ ) and wanbridge ( http://code.google.com/p/wanbridge/ ), but neither of those seems to have been updated in a while.

Plus: WANem has no obvious way to be used as a transparent bridge, and wanbridge has an unaddressed defect concerning how well it actually does rate limiting.

So, netem looks great, but I'm looking for a nice wrapper. I don't mind learning the command syntax; in fact I'd like to. But I want to do testing over a fairly wide range of conditions, and not everyone who can assist me is as comfortable with a Linux command line.
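(For anyone finding this in the archive: the raw tc syntax is fairly terse. A minimal sketch, assuming an egress interface named eth0 — requires root:)

```shell
# Emulate a WAN on traffic leaving eth0: 100ms delay with 10ms jitter
tc qdisc add dev eth0 root netem delay 100ms 10ms
# Tighten the same emulation to also drop 1% of packets
tc qdisc change dev eth0 root netem delay 100ms 10ms loss 1%
# Remove the emulation when done
tc qdisc del dev eth0 root
```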

Thank you

From shemminger at vyatta.com  Fri Sep  7 16:58:03 2012
From: shemminger at vyatta.com (Stephen Hemminger)
Date: Fri, 7 Sep 2012 09:58:03 -0700
Subject: bootable iso for netem usage?
In-Reply-To: <20120907135508.77280@xxxxxxx>
References: <20120907135508.77280@xxxxxxx>
Message-ID: <20120907095803.34d7915a@xxxxxxxxxxxxxxxxxxxxxxxxxxx>

On Fri, 07 Sep 2012 09:55:08 -0400
"Q. Chap" <quitechap at gmx.com> wrote:

> Hello all,
> 
> It looks like netem can do some really interesting WAN emulation.
> 
> Has anyone packaged this into a "recommended / well-known" bootable iso distro with a simple web-based UI?
> 
> I found WANem ( http://wanem.sourceforge.net/ ) and wanbridge ( http://code.google.com/p/wanbridge/ ), but neither of those seems to have been updated in a while.
> 
> Plus: WANem has no obvious way to be used as a transparent bridge, and wanbridge has an unaddressed defect concerning how well it actually does rate limiting.
> 
> So, netem looks great, but I'm looking for a nice wrapper. I don't mind learning the command syntax; in fact I'd like to. But I want to do testing over a fairly wide range of conditions, and not everyone who can assist me is as comfortable with a Linux command line.
> 
> Thank you
> _______________________________________________
> Netem mailing list
> Netem at lists.linux-foundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/netem

No idea about other ISOs.
I did put some netem configuration in the Vyatta CLI under 'traffic-policy network-emulator'.

Community ISO here: http://vyatta.org/downloads
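(A configuration under that node looks roughly like the sketch below; the policy name WAN-SIM is made up here, and the exact option names should be checked against your Vyatta version's completion:)

```
set traffic-policy network-emulator WAN-SIM network-delay 100ms
set traffic-policy network-emulator WAN-SIM packet-loss 1
set interfaces ethernet eth0 traffic-policy out WAN-SIM
commit
```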

From David.Groves at bskyb.com  Tue Sep 11 10:49:52 2012
From: David.Groves at bskyb.com (Groves, David)
Date: Tue, 11 Sep 2012 10:49:52 +0000
Subject: Bridging, adding latency and buffer sizes
Message-ID: <64D28CA6F5BFD44D891175ABC3AEE21F1F4C1F82@xxxxxxxxxxxxxxxxxxx>

I have the following setup (all boxes are Ubuntu 12.04, 3.0.0-17 kernel, with whatever patches Ubuntu adds):

HostA <----> Bridge <----> HostB 	(All links are 1Gbit/s).

On the Bridge, I'm configuring a bunch of different netem rules to simulate various network impairments. However, I'm getting some interesting results with regard to latency. With no impairments, I can easily get 950Mbit/s+ throughput on a large file between A and B (I'm seeing sane values for the WSF, sensible advertised windows, and ss output showing the kind of behaviour I would expect with regard to the growth of the cwnd). I have also set "net.ipv4.tcp_no_metrics_save=1" to avoid cached values messing up my "TCP learning" behaviour.

As I increase the latency, I'm seeing a linear increase in the time it takes to complete an HTTP download from HostA to HostB (where I'm expecting to see an increase in the time it takes to scale the window, but for large files (1 gigabyte, say) I'm expecting the latency to make little difference), to the point that adding 4ms brings me down to around 55Mbit/s, and 8ms down to around 28Mbit/s.

I strongly suspect this is because my buffers aren't deep enough on the Bridge. I suspect this because, looking at packet captures of the session, I'm seeing a lot of DUPACKs and the associated TCP retransmissions, which are indicative of lost TCP segments. Some rough maths suggests that, with a 4ms (one-way) delay, the bridge is going to need to buffer around 370 packets, or around 550k of data, to add this 4ms worth of latency, and double that with 8ms.
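(That back-of-envelope figure is just the bandwidth-delay product; a quick sketch, assuming 1500-byte frames:)

```shell
# Bandwidth-delay product: bytes in flight for 1 Gbit/s over a 4 ms one-way delay
RATE_BPS=1000000000     # 1 Gbit/s
DELAY_MS=4
BDP_BYTES=$(( RATE_BPS / 8 * DELAY_MS / 1000 ))
PKTS=$(( BDP_BYTES / 1500 ))  # full-size Ethernet frames
echo "$BDP_BYTES bytes, ~$PKTS packets"
# → 500000 bytes, ~333 packets
```

That raw figure lands in the same ballpark as the ~370 packets / ~550k above once you allow some headroom for smaller frames and overhead.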

I have a similar test rig elsewhere, which is a Linux box hung off a DSL line, and there I get much more like the behaviour I would expect (I suspect because I'm blowing out the buffers on the DSL router long before I put the Linux network buffers under any pressure).

Does anyone have any pointers on what I should be doing to determine:
a.) If this is really my problem.
b.) If it is, how I can go about fixing it.

I'm somewhat ignorant of the Linux networking stack, so it is possible this is blindingly obvious. So far I've tried the things below without any success:

- Increasing the txqueuelen (with ifconfig) on the bridge (for both eth0 and eth1).
- Increasing the net.core.rmem* and net.core.wmem* sysctl variables.
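(For reference, those two attempts amount to commands along these lines; the queue length and buffer values shown are illustrative, and both need root:)

```shell
# Lengthen the driver transmit queues on both bridge ports
ifconfig eth0 txqueuelen 10000
ifconfig eth1 txqueuelen 10000
# Raise the kernel's socket-buffer ceilings (bytes)
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216
```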


-- 
David Groves
Senior IP Network Development Engineer
British Sky Broadcasting
david.groves at bskyb.com / 0207 032 7339





From shemminger at vyatta.com  Tue Sep 11 15:29:20 2012
From: shemminger at vyatta.com (Stephen Hemminger)
Date: Tue, 11 Sep 2012 08:29:20 -0700
Subject: Bridging, adding latency and buffer sizes
In-Reply-To: <64D28CA6F5BFD44D891175ABC3AEE21F1F4C1F82@xxxxxxxxxxxxxxxxxxx>
References: <64D28CA6F5BFD44D891175ABC3AEE21F1F4C1F82@xxxxxxxxxxxxxxxxxxx>
Message-ID: <20120911082920.19313f26@xxxxxxxxxxxxxxxxxxxxxxxxxxx>

On Tue, 11 Sep 2012 10:49:52 +0000
"Groves, David" <David.Groves at bskyb.com> wrote:

> I have the following setup (all boxes are Ubuntu 12.04, 3.0.0-17 kernel, with whatever patches Ubuntu adds):
> 
> HostA <----> Bridge <----> HostB 	(All links are 1Gbit/s).
> 
> On the Bridge, I'm configuring a bunch of different netem rules to simulate various network impairments. However, I'm getting some interesting results with regard to latency. With no impairments, I can easily get 950Mbit/s+ throughput on a large file between A and B (I'm seeing sane values for the WSF, sensible advertised windows, and ss output showing the kind of behaviour I would expect with regard to the growth of the cwnd). I have also set "net.ipv4.tcp_no_metrics_save=1" to avoid cached values messing up my "TCP learning" behaviour.
> 
> As I increase the latency, I'm seeing a linear increase in the time it takes to complete an HTTP download from HostA to HostB (where I'm expecting to see an increase in the time it takes to scale the window, but for large files (1 gigabyte, say) I'm expecting the latency to make little difference), to the point that adding 4ms brings me down to around 55Mbit/s, and 8ms down to around 28Mbit/s.
> 
> I strongly suspect this is because my buffers aren't deep enough on the Bridge. I suspect this because, looking at packet captures of the session, I'm seeing a lot of DUPACKs and the associated TCP retransmissions, which are indicative of lost TCP segments. Some rough maths suggests that, with a 4ms (one-way) delay, the bridge is going to need to buffer around 370 packets, or around 550k of data, to add this 4ms worth of latency, and double that with 8ms.
> 
> I have a similar test rig elsewhere, which is a Linux box hung off a DSL line, and there I get much more like the behaviour I would expect (I suspect because I'm blowing out the buffers on the DSL router long before I put the Linux network buffers under any pressure).
> 
> Does anyone have any pointers on what I should be doing to determine:
> a.) If this is really my problem.
> b.) If it is, how I can go about fixing it.
> 
> I'm somewhat ignorant of the Linux networking stack, so it is possible this is blindingly obvious. So far I've tried the things below without any success:
> 
> - Increasing the txqueuelen (with ifconfig) on the bridge (for both eth0 and eth1).
> - Increasing the net.core.rmem* and net.core.wmem* sysctl variables.
> 
>

First, the bridge device configuration only applies to packets locally sent on the
bridge. For what you are doing it is meaningless.

You should be using netem on one or more of the Ethernet devices attached
to the bridge.

Did you increase the queue size in netem? The netem qdisc needs to buffer packets
based on the bandwidth and delay.
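(For the archive: that queue size is netem's 'limit' parameter, counted in packets. A sketch, with eth0 and the figures illustrative — 1 Gbit/s over 4 ms needs roughly 340 full-size frames in flight, so round up generously so the queue, not drops, supplies the latency:)

```shell
tc qdisc add dev eth0 root netem delay 4ms limit 1000
```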


