[Fwd: [PATCH 2.6.18 0/2] LARTC: trace control for netem]

Hi Stephen

I just wanted to ask you if you already had time to test our trace
extension for netem, as discussed on the 13th of December.

Cheers
Rainer

Rainer Baumann wrote:
> Hi Stephen
>
> As discussed yesterday, here are our patches to integrate trace control into netem.
>
>
>
> Trace Control for Netem: Emulate network properties such as long range dependency and self-similarity of cross-traffic.
>
> A new option (trace) has been added to the netem command. If the trace option is used, the values for packet delay etc. are read from a pregenerated trace file; afterwards the packets are processed by the normal netem functions. The packet action values are read out from the trace file in user space and sent to kernel space via configfs.
>
>
>
>
>
>
>







From sammo2828 at gmail.com  Wed Feb  7 17:38:41 2007
From: sammo2828 at gmail.com (Sammo)
Date: Wed Apr 18 12:51:21 2007
Subject: Slow transfer with netem even when no delay
Message-ID: <4e4d89fe0702071738t246d3c6n30e0e2fdc57e6865@xxxxxxxxxxxxxx>

My setup consists of two windows PCs connected with a Linux PC acting
as a bridge.

When the bridge has no qdiscs added:
   tc qdisc del dev eth0 root
   tc qdisc del dev eth1 root
ping time is <1ms, and I am able to get around 580Mbps using iperf
(tcp window 1024K).

After configuring the bridge with netem (no delay and the default limit
of 1000) as follows:
   tc qdisc add dev eth0 root netem
   tc qdisc add dev eth1 root netem
ping time is around 7ms, and I am only able to get 60Mbps using iperf
(tcp window 1024K).

I tried increasing the limit to 10000 but it made no difference.

Running Knoppix 5.1.1 with kernel 2.6.19
Two Gigabit Ethernet cards are Broadcom NetXtreme using the tg3 driver
Xeon 3.0GHz (close to 0% utilization)

Any suggestions why it's so slow with netem?

Thanks

From shemminger at linux-foundation.org  Thu Feb  8 08:38:29 2007
From: shemminger at linux-foundation.org (Stephen Hemminger)
Date: Wed Apr 18 12:51:21 2007
Subject: Slow transfer with netem even when no delay
In-Reply-To: <4e4d89fe0702071738t246d3c6n30e0e2fdc57e6865@xxxxxxxxxxxxxx>
References: <4e4d89fe0702071738t246d3c6n30e0e2fdc57e6865@xxxxxxxxxxxxxx>
Message-ID: <20070208083829.7a26d5eb@oldman>

On Thu, 8 Feb 2007 12:38:41 +1100
Sammo <sammo2828@xxxxxxxxx> wrote:

> My setup consists of two windows PCs connected with a Linux PC acting
> as a bridge.
> 
> When the bridge has no qdiscs added:
>    tc qdisc del dev eth0 root
>    tc qdisc del dev eth1 root
> ping time is <1ms, and I am able to get around 580Mbps using iperf
> (tcp window 1024K).
> 
> After configuring the bridge with netem (no delay and the default limit
> of 1000) as follows:
>    tc qdisc add dev eth0 root netem
>    tc qdisc add dev eth1 root netem
> ping time is around 7ms, and I am only able to get 60Mbps using iperf
> (tcp window 1024K).

Netem always delays for at least one system clock tick (HZ). There are
a couple of workarounds:
* Set HZ to 1000 on 2.6
* Use a real time kernel and the netem-rt patch to get faster response.
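
For reference, a quick way to check which HZ value a running 2.6 kernel was
built with (a sketch; it assumes the distribution installs the kernel config
under /boot):

   # tick rate the running kernel was built with
   grep CONFIG_HZ /boot/config-$(uname -r)
   # some kernels expose the config directly instead
   zcat /proc/config.gz | grep CONFIG_HZ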

From sammo2828 at gmail.com  Thu Feb  8 13:50:54 2007
From: sammo2828 at gmail.com (Sammo)
Date: Wed Apr 18 12:51:21 2007
Subject: Slow transfer with netem even when no delay
In-Reply-To: <20070208083829.7a26d5eb@oldman>
References: <4e4d89fe0702071738t246d3c6n30e0e2fdc57e6865@xxxxxxxxxxxxxx>
	<20070208083829.7a26d5eb@oldman>
Message-ID: <4e4d89fe0702081350x1a3901dk80c30843b8e1bb35@xxxxxxxxxxxxxx>

Do you know of any live-cds (like Knoppix) that have a kernel with HZ
1000 as standard?

On 2/9/07, Stephen Hemminger <shemminger@xxxxxxxxxxxxxxxxxxxx> wrote:
> On Thu, 8 Feb 2007 12:38:41 +1100
> Sammo <sammo2828@xxxxxxxxx> wrote:
>
> > My setup consists of two windows PCs connected with a Linux PC acting
> > as a bridge.
> >
> > When the bridge has no qdiscs added:
> >    tc qdisc del dev eth0 root
> >    tc qdisc del dev eth1 root
> > ping time is <1ms, and I am able to get around 580Mbps using iperf
> > (tcp window 1024K).
> >
> > After configuring the bridge with netem (no delay and the default limit
> > of 1000) as follows:
> >    tc qdisc add dev eth0 root netem
> >    tc qdisc add dev eth1 root netem
> > ping time is around 7ms, and I am only able to get 60Mbps using iperf
> > (tcp window 1024K).
>
> Netem always delays for at least one system clock tick (HZ). There are
> a couple of workarounds:
> * Set HZ to 1000 on 2.6
> * Use a real time kernel and the netem-rt patch to get faster response.
>

From shemminger at linux-foundation.org  Thu Feb  8 14:18:47 2007
From: shemminger at linux-foundation.org (Stephen Hemminger)
Date: Wed Apr 18 12:51:21 2007
Subject: Slow transfer with netem even when no delay
In-Reply-To: <4e4d89fe0702081350x1a3901dk80c30843b8e1bb35@xxxxxxxxxxxxxx>
References: <4e4d89fe0702071738t246d3c6n30e0e2fdc57e6865@xxxxxxxxxxxxxx>
	<20070208083829.7a26d5eb@oldman>
	<4e4d89fe0702081350x1a3901dk80c30843b8e1bb35@xxxxxxxxxxxxxx>
Message-ID: <20070208141847.6f09d44d@oldman>

On Fri, 9 Feb 2007 08:50:54 +1100
Sammo <sammo2828@xxxxxxxxx> wrote:

> Do you know of any live-cds (like Knoppix) that have a kernel with HZ
> 1000 as standard?
> 
> On 2/9/07, Stephen Hemminger <shemminger@xxxxxxxxxxxxxxxxxxxx> wrote:
> > On Thu, 8 Feb 2007 12:38:41 +1100
> > Sammo <sammo2828@xxxxxxxxx> wrote:
> >
> > > My setup consists of two windows PCs connected with a Linux PC acting
> > > as a bridge.
> > >
> > > When the bridge has no qdiscs added:
> > >    tc qdisc del dev eth0 root
> > >    tc qdisc del dev eth1 root
> > > ping time is <1ms, and I am able to get around 580Mbps using iperf
> > > (tcp window 1024K).
> > >
> > > After configuring the bridge with netem (no delay and the default limit
> > > of 1000) as follows:
> > >    tc qdisc add dev eth0 root netem
> > >    tc qdisc add dev eth1 root netem
> > > ping time is around 7ms, and I am only able to get 60Mbps using iperf
> > > (tcp window 1024K).
> >
> > Netem always delays for at least one system clock tick (HZ). There are
> > a couple of workarounds:
> > * Set HZ to 1000 on 2.6
> > * Use a real time kernel and the netem-rt patch to get faster response.
> >

Have you tried the Ubuntu Live-CD? Or Fedora?

From shemminger at linux-foundation.org  Tue Feb 13 16:16:02 2007
From: shemminger at linux-foundation.org (Stephen Hemminger)
Date: Wed Apr 18 12:51:21 2007
Subject: [Fwd: [PATCH 2.6.18 0/2] LARTC: trace control for netem]
In-Reply-To: <45CA1C26.1080405@xxxxxxxxxxxxxx>
References: <45827B5C.3090402@xxxxxxxxxxxxxx> <45CA1C26.1080405@xxxxxxxxxxxxxx>
Message-ID: <20070213161602.16607203@freekitty>

On Wed, 07 Feb 2007 19:36:22 +0100
Rainer Baumann <baumann@xxxxxxxxxxxxxx> wrote:

> Hi Stephen
> 
> I just wanted to ask you if you already had time to test our trace
> extension for netem, as discussed on the 13th of December.
> 
> Cheers
> Rainer
> 
> Rainer Baumann wrote:
> > Hi Stephen
> >
> > As discussed yesterday, here are our patches to integrate trace control into netem.
> >
> >
> >
> > Trace Control for Netem: Emulate network properties such as long range dependency and self-similarity of cross-traffic.
> >
> > A new option (trace) has been added to the netem command. If the trace option is used, the values for packet delay etc. are read from a pregenerated trace file; afterwards the packets are processed by the normal netem functions. The packet action values are read out from the trace file in user space and sent to kernel space via configfs.
> >
> >
> 

I looked at it some more, and want to go in and clean up the configuration and buffer interface
slightly before inclusion. Once it is in, I get stuck with the ABI, so I don't want to do something
wrong.

-- 
Stephen Hemminger <shemminger@xxxxxxxxxxxxxxxxxxxx>

From baumann at tik.ee.ethz.ch  Wed Feb 14 00:57:07 2007
From: baumann at tik.ee.ethz.ch (Rainer Baumann)
Date: Wed Apr 18 12:51:21 2007
Subject: [Fwd: [PATCH 2.6.18 0/2] LARTC: trace control for netem]
In-Reply-To: <20070213161602.16607203@freekitty>
References: <45827B5C.3090402@xxxxxxxxxxxxxx>	<45CA1C26.1080405@xxxxxxxxxxxxxx>
	<20070213161602.16607203@freekitty>
Message-ID: <45D2CEE3.9020207@xxxxxxxxxxxxxx>



Stephen Hemminger wrote:
> On Wed, 07 Feb 2007 19:36:22 +0100
> Rainer Baumann <baumann@xxxxxxxxxxxxxx> wrote:
>
>   
>> Hi Stephen
>>
>> I just wanted to ask you if you already had time to test our trace
>> extension for netem, as discussed on the 13th of December.
>>
>> Cheers
>> Rainer
>>
>> Rainer Baumann wrote:
>>     
>>> Hi Stephen
>>>
>>> As discussed yesterday, here are our patches to integrate trace control into netem.
>>>
>>>
>>>
>>> Trace Control for Netem: Emulate network properties such as long range dependency and self-similarity of cross-traffic.
>>>
>>> A new option (trace) has been added to the netem command. If the trace option is used, the values for packet delay etc. are read from a pregenerated trace file; afterwards the packets are processed by the normal netem functions. The packet action values are read out from the trace file in user space and sent to kernel space via configfs.
>>>
>>>
>>>       
>
> I looked at it some more, and want to go in and clean up the configuration and buffer interface
> slightly before inclusion. Once it is in, I get stuck with the ABI, so I don't want to do something
> wrong.
>   
Just let me know if we can help.


From horms at verge.net.au  Tue Feb 13 20:26:31 2007
From: horms at verge.net.au (Simon Horman)
Date: Wed Apr 18 12:51:21 2007
Subject: Update OSDL/Linux-Foundation maintainer addresses
Message-ID: <20070214042631.GB3327@xxxxxxxxxxxx>

Hi,

I'm not sure if this is appropriate or not, but here goes anyway.

The patch below updates MAINTAINERS addresses:
  Individuals (Only Andrew :): osdl.org -> linux-foundation.org
  Lists:                       osdl.org -> lists.osdl.org

I assume the latter will change at some stage, but at least
with this change the osdl/linux-foundation lists are consistent.

Signed-off-by: Simon Horman <horms@xxxxxxxxxxxx>

Index: linux-2.6/MAINTAINERS
===================================================================
--- linux-2.6.orig/MAINTAINERS	2007-02-14 13:09:28.000000000 +0900
+++ linux-2.6/MAINTAINERS	2007-02-14 13:18:54.000000000 +0900
@@ -1283,7 +1283,7 @@
 ETHERNET BRIDGE
 P:	Stephen Hemminger
 M:	shemminger@xxxxxxxxxxxxxxxxxxxx
-L:	bridge@xxxxxxxx
+L:	bridge@xxxxxxxxxxxxxx
 W:	http://bridge.sourceforge.net/
 S:	Maintained
 
@@ -1298,13 +1298,13 @@
 
 EXT3 FILE SYSTEM
 P:	Stephen Tweedie, Andrew Morton
-M:	sct@xxxxxxxxxx, akpm@xxxxxxxx, adilger@xxxxxxxxxxxxx
+M:	sct@xxxxxxxxxx, akpm@xxxxxxxxxxxxxxxxxxxx, adilger@xxxxxxxxxxxxx
 L:	linux-ext4@xxxxxxxxxxxxxxx
 S:	Maintained
 
 EXT4 FILE SYSTEM
 P:	Stephen Tweedie, Andrew Morton
-M:	sct@xxxxxxxxxx, akpm@xxxxxxxx, adilger@xxxxxxxxxxxxx
+M:	sct@xxxxxxxxxx, akpm@xxxxxxxxxxxxxxxxxxxx, adilger@xxxxxxxxxxxxx
 L:	linux-ext4@xxxxxxxxxxxxxxx
 S:	Maintained
 
@@ -1901,7 +1901,7 @@
 
 JOURNALLING LAYER FOR BLOCK DEVICES (JBD)
 P:	Stephen Tweedie, Andrew Morton
-M:	sct@xxxxxxxxxx, akpm@xxxxxxxx
+M:	sct@xxxxxxxxxx, akpm@xxxxxxxxxxxxxxxxxxxx
 L:	linux-ext4@xxxxxxxxxxxxxxx
 S:	Maintained
 
@@ -1972,7 +1972,7 @@
 M:	ebiederm@xxxxxxxxxxxx
 W:	http://www.xmission.com/~ebiederm/files/kexec/
 L:	linux-kernel@xxxxxxxxxxxxxxx
-L:	fastboot@xxxxxxxx
+L:	fastboot@xxxxxxxxxxxxxx
 S:	Maintained
 
 KPROBES
@@ -2310,7 +2310,7 @@
 NETEM NETWORK EMULATOR
 P:	Stephen Hemminger
 M:	shemminger@xxxxxxxxxxxxxxxxxxxx
-L:	netem@xxxxxxxx
+L:	netem@xxxxxxxxxxxxxx
 S:	Maintained
 
 NETFILTER/IPTABLES/IPCHAINS
@@ -2349,7 +2349,7 @@
 
 NETWORK DEVICE DRIVERS
 P:	Andrew Morton
-M:	akpm@xxxxxxxx
+M:	akpm@xxxxxxxxxxxxxxxxxxxx
 P:	Jeff Garzik
 M:	jgarzik@xxxxxxxxx
 L:	netdev@xxxxxxxxxxxxxxx
@@ -3044,7 +3044,7 @@
 SOFTWARE SUSPEND:
 P:	Pavel Machek
 M:	pavel@xxxxxxx
-L:	linux-pm@xxxxxxxx
+L:	linux-pm@xxxxxxxxxxxxxx
 S:	Maintained
 
 SONIC NETWORK DRIVER

From lucas.nussbaum at imag.fr  Mon Feb 19 02:36:28 2007
From: lucas.nussbaum at imag.fr (Lucas Nussbaum)
Date: Wed Apr 18 12:51:21 2007
Subject: Fw: [BUG?/SCHED] netem and tbf seem to busy wait with some
	clock sources
Message-ID: <20070219103628.GA7621@xxxxxxxxxxxxxxxx>

Hi,

I sent this mail to the netdev mailing list last Thursday, but haven't
received any replies. Maybe someone will be able to help me here?

----- Forwarded message from Lucas Nussbaum <lucas.nussbaum@xxxxxxx> -----

From: Lucas Nussbaum <lucas.nussbaum@xxxxxxx>
To: netdev@xxxxxxxxxxxxxxx
Date: 	Thu, 15 Feb 2007 16:40:07 +0100
Subject: [BUG?/SCHED] netem and tbf seem to busy wait with some clock sources

Hi,

While experimenting with netem and tbf, I ran into some strange results.

Experimental setup:
tc qdisc add dev eth2 root netem delay 10ms
Linux 2.6.20-rc6 ; HZ=250

I measured the latency using a modified "ping" implementation, to allow
for high frequency measurement (one measurement every 0.1ms). I compared the
results using different clock sources.

See http://www-id.imag.fr/~nussbaum/sched/clk-latency.png

The results with CLK_JIFFIES are the ones I expected: one clearly sees
the influence of HZ, with latency varying around 10ms +/- (1/2)*(1000/HZ).

On the other hand, the results with CLK_GETTIMEOFDAY or CLK_CPU don't
seem to be bound to the HZ setting. Looking at the source, I suspect
that netem is actually sort of busy-waiting, by re-setting the timer to
the old value.

I instrumented netem_dequeue to confirm this (see [1]):
PSCHED_US2JIFFIE(delay) returns 0, causing the timer to be rescheduled
at the same jiffie. I could see netem_dequeue being called up to 150
times during a jiffie (at HZ=250).

So, my question is: Is this the expected behaviour? Wouldn't it be
better to set the timer to jiffies + 1 if the delay is 0? Or to send
the packet immediately if the delay is 0, instead of waiting?

I got similar results with tbf (frames being spaced "too well", instead
of bursting).

[1] http://www-id.imag.fr/~nussbaum/sched/netem_dequeue.diff

Thank you,

----- End forwarded message -----

-- 
| Lucas Nussbaum                      PhD student |
| lucas.nussbaum@xxxxxxx       LIG / Projet MESCAL |
| jabber: lucas@xxxxxxxxxxx    +33 (0)6 64 71 41 65 |
| homepage:        http://www-id.imag.fr/~nussbaum/ |

From taankr at aston.ac.uk  Mon Feb 19 04:18:52 2007
From: taankr at aston.ac.uk (Ritesh Taank)
Date: Wed Apr 18 12:51:21 2007
Subject: Packet Loss Issue with Netem on Linux Bridge
Message-ID: <1171887532.3468.8.camel@localhost.localdomain>

Hello,

I have a linux box (2.6.19) with two NICs (eth0 and eth1), set up as a
bridge.

The bridge sits between a server machine and a client machine (via
cross-over cables).

The server machine is connected to the eth0 NIC.

The client machine is connected to the eth1 NIC.

I am sending raw TCP data to the client machine from the server machine.
This means data enters the bridge via eth0, and is bridged through to
eth1, where it leaves the linux box and arrives at the client machine.

The forward channel is therefore all packets travelling from the server
to the client. The reverse channel is obviously all the acknowledgements
flowing back to the server.

I am using Netem to add loss rates to eth0 and eth1, either together or
independently.

I have noticed something very interesting.

When adding a very high packet loss rate to eth0 (i.e. tc qdisc add dev
eth0 root netem loss 40%) and then running my raw TCP data throughput
tests, I get full throughput results, exactly as I do when there is
no qdisc attached to eth0. In fact, for loss rates ranging from 0% to
50% my results are unchanged. Basically, I do not think Netem is
dropping packets in the forward channel direction, which is what I am
trying to emulate.

Looking at the structure of how netem is implemented, by attaching a
qdisc to eth0, all packets entering eth0 from the server should be
subjected to any rule applied to eth0.

Is anybody able to explain what is happening here?

Thanks in advance.

Ritesh

---
Adaptive Communications Networks Research Group
Electronic Engineering Dept.
Aston University
Birmingham
B7 4ET

t: +44 (0)7732 069 667
e: taankr@xxxxxxxxxxx
w: www-users.aston.ac.uk/~taankr
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.linux-foundation.org/pipermail/netem/attachments/20070219/8dddf977/attachment.htm
From calum.lind at newport-networks.com  Mon Feb 19 04:58:26 2007
From: calum.lind at newport-networks.com (Calum Lind)
Date: Wed Apr 18 12:51:21 2007
Subject: Packet Loss Issue with Netem on Linux Bridge
In-Reply-To: <1171887532.3468.8.camel@localhost.localdomain>
Message-ID: <001901c75425$a7e8e650$e301a8c0@xxxxxxxxxxxxxxxxxxx>

This is a basic concept of Netem: by default it will only apply filters
to packets on the egress of an interface. Therefore, to apply loss to packets
traveling from the eth0 side through to the eth1 side you need to apply the filters
to eth1 (and vice versa). It is documented but not very clear.
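
For the bridge described above (server on eth0, client on eth1), that would
look something like the following sketch, reusing the 40% figure from the
original mail:

   # forward channel (server -> client): loss on frames leaving via eth1
   tc qdisc add dev eth1 root netem loss 40%
   # reverse channel (ACKs leaving via eth0), if you also want loss there
   tc qdisc add dev eth0 root netem loss 40%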

 

Hope this helps

 

Calum

 

 

  _____  

From: netem-bounces@xxxxxxxxxxxxxx [mailto:netem-bounces@xxxxxxxxxxxxxx] On
Behalf Of Ritesh Taank
Sent: 19 February 2007 12:19
To: netem@xxxxxxxxxxxxxx
Subject: Packet Loss Issue with Netem on Linux Bridge

 

Hello,

I have a linux box (2.6.19) with two NICs (eth0 and eth1), set up as a
bridge.

The bridge sits between a server machine and a client machine (via
cross-over cables).

The server machine is connected to the eth0 NIC.

The client machine is connected to the eth1 NIC.

I am sending raw TCP data to the client machine from the server machine.
This means data enters the bridge via eth0, and is bridged through to eth1,
where it leaves the linux box and arrives at the client machine.

The forward channel is therefore all packets travelling from the server to
the client. The reverse channel is obviously all the acknowledgements
flowing back to the server.

I am using Netem to add loss rates to eth0 and eth1, either together or
independently.

I have noticed something very interesting.

When adding a very high packet loss rate to eth0 (i.e. tc qdisc add dev eth0
root netem loss 40%) and then running my raw TCP data throughput tests, I
get full throughput results, exactly as I do when there is no qdisc
attached to eth0. In fact, for loss rates ranging from 0% to 50% my results
are unchanged. Basically, I do not think Netem is dropping packets in the
forward channel direction, which is what I am trying to emulate.

Looking at the structure of how netem is implemented, by attaching a qdisc
to eth0, all packets entering eth0 from the server should be subjected to
any rule applied to eth0.

Is anybody able to explain what is happening here?

Thanks in advance.

Ritesh


---
Adaptive Communications Networks Research Group
Electronic Engineering Dept.
Aston University
Birmingham
B7 4ET

t: +44 (0)7732 069 667
e: taankr@xxxxxxxxxxx
w: www-users.aston.ac.uk/~taankr <http://www-users.aston.ac.uk/~taankr/>  

 




-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.linux-foundation.org/pipermail/netem/attachments/20070219/0cea7816/attachment.htm
From marco.happenhofer at tuwien.ac.at  Mon Feb 19 07:48:53 2007
From: marco.happenhofer at tuwien.ac.at (Happenhofer Marco)
Date: Wed Apr 18 12:51:21 2007
Subject: Unexpected loss behaviour
Message-ID: <PNEDLFAPPBAKOJHPDABLCEFECCAA.marco.happenhofer@xxxxxxxxxxxx>

I want to delay and drop traffic.
In my script I assign TCP, ICMP and UDP to certain classes, and delay and
drop ICMP packets.
I expected that after running the script (statistically) every 10th ICMP packet
would be dropped, but that does not happen!
Delay and duplication work fine!

Is my script wrong, is this function not covered yet, or is there a bug?

SCRIPT BEGIN
DEV="eth0"

#clear recent configurations
tc qdisc del dev $DEV root

tc qdisc add dev $DEV root handle 1: htb default 40

tc class add dev $DEV parent 1: classid 1:1 htb rate 10mbit

tc class add dev $DEV parent 1:1 classid 1:11 htb rate 10kbit ceil 20kbit burst 50k
tc class add dev $DEV parent 1:1 classid 1:12 htb rate 2mbit
tc class add dev $DEV parent 1:1 classid 1:13 htb rate 1mbit

tc qdisc add dev $DEV parent 1:11 handle 11: netem drop 10%
tc qdisc add dev $DEV parent 1:12 handle 12: sfq perturb 1
tc qdisc add dev $DEV parent 1:13 handle 13: sfq perturb 10

#assign traffic to classes (ICMP=1, TCP=6, UDP=17)
U32="tc filter add dev $DEV protocol ip parent 1:0 prio 1 u32"
$U32 match ip protocol 1 0xff flowid 1:11
$U32 match ip protocol 6 0xff flowid 1:12
$U32 match ip protocol 17 0xff flowid 1:13
SCRIPT END
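
One way to narrow this down is to look at the qdisc and class counters after
sending some ICMP traffic; a sketch (the "dropped" counter of qdisc 11: should
grow if netem is discarding packets, and class 1:11 should show the ICMP bytes):

   tc -s qdisc show dev eth0
   tc -s class show dev eth0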

Kernel: 2.6.19-1.2911.fc6

Thanks for any help!

-----------------------------------------
Institut für Breitbandkommunikation
Technische Universität Wien
Favoritenstraße 9/388
A-1040 Wien
tel: +43 1 58801 38833
mailto: marco.happenhofer@xxxxxxxxxxxx
-----------------------------------------


From ian.mcdonald at jandi.co.nz  Mon Feb 19 13:04:21 2007
From: ian.mcdonald at jandi.co.nz (Ian McDonald)
Date: Wed Apr 18 12:51:21 2007
Subject: Packet Loss Issue with Netem on Linux Bridge
In-Reply-To: <001901c75425$a7e8e650$e301a8c0@xxxxxxxxxxxxxxxxxxx>
References: <1171887532.3468.8.camel@localhost.localdomain>
	<001901c75425$a7e8e650$e301a8c0@xxxxxxxxxxxxxxxxxxx>
Message-ID: <5640c7e00702191304m68041b65q1aaeef0ea107e060@xxxxxxxxxxxxxx>

In theory you can do it on inbound queues by using ifb, but I haven't tested it myself:
http://linux-net.osdl.org/index.php/Netem#How_can_I_use_netem_on_incoming_traffic.3F
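
A minimal sketch of that approach, assuming the ifb module is available and
using eth0 as the interface the traffic arrives on:

   modprobe ifb
   ip link set dev ifb0 up
   # send everything arriving on eth0 through ifb0
   tc qdisc add dev eth0 ingress
   tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 \
       flowid 1:1 action mirred egress redirect dev ifb0
   # netem then acts on ifb0's egress
   tc qdisc add dev ifb0 root netem loss 40%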

On 2/20/07, Calum Lind <calum.lind@xxxxxxxxxxxxxxxxxxxx> wrote:
>
>
>
>
> This is a basic concept of Netem: by default it will only apply filters to packets on the egress of an interface. Therefore, to apply loss to packets traveling from the eth0 side through to the eth1 side you need to apply the filters to eth1 (and vice versa). It is documented but not very clear.
>
>
>
> Hope this helps
>
>
>
> Calum
>
>
>
>
>
>   ________________________________

>
> From: netem-bounces@xxxxxxxxxxxxxx [mailto:netem-bounces@xxxxxxxxxxxxxx] On Behalf Of Ritesh Taank
>  Sent: 19 February 2007 12:19
>  To: netem@xxxxxxxxxxxxxx
>  Subject: Packet Loss Issue with Netem on Linux Bridge
>
>
>
>
> Hello,
>
>  I have a linux box (2.6.19) with two NICs (eth0 and eth1), set up as a bridge.
>
>  The bridge sits between a server machine and a client machine (via cross-over cables).
>
>  The server machine is connected to the eth0 NIC.
>
>  The client machine is connected to the eth1 NIC.
>
>  I am sending raw TCP data to the client machine from the server machine. This means data enters the bridge via eth0, and is bridged through to eth1, where it leaves the linux box and arrives at the client machine.
>
>  The forward channel is therefore all packets travelling from the server to the client. The reverse channel is obviously all the acknowledgements flowing back to the server.
>
>  I am using Netem to add loss rates to eth0 and eth1, either together or independently.
>
>  I have noticed something very interesting.
>
>  When adding a very high packet loss rate to eth0 (i.e. tc qdisc add dev eth0 root netem loss 40%) and then running my raw TCP data throughput tests, I get full throughput results, exactly as I do when there is no qdisc attached to eth0. In fact, for loss rates ranging from 0% to 50% my results are unchanged. Basically, I do not think Netem is dropping packets in the forward channel direction, which is what I am trying to emulate.
>
>  Looking at the structure of how netem is implemented, by attaching a qdisc to eth0, all packets entering eth0 from the server should be subjected to any rule applied to eth0.
>
>  Is anybody able to explain what is happening here?
>
>  Thanks in advance.
>
>  Ritesh
>
>
> ---
>    Adaptive Communications Networks Research Group
>    Electronic Engineering Dept.
>  Aston University
>  Birmingham
>    B7 4ET
>
>    t: +44 (0)7732 069 667
>    e: taankr@xxxxxxxxxxx
>    w: www-users.aston.ac.uk/~taankr
>
>
>
>  ________________________________
>
>
>



-- 
Web: http://wand.net.nz/~iam4
Blog: http://iansblog.jandi.co.nz
WAND Network Research Group

From osenbach at us.ibm.com  Mon Feb 19 13:39:13 2007
From: osenbach at us.ibm.com (Bryan Osenbach)
Date: Wed Apr 18 12:51:21 2007
Subject: Bryan Osenbach is out of the office.
Message-ID: <OF3997A824.B2A74B89-ON87257287.0076F2A4-87257287.0076F2A4@xxxxxxxxxx>


I will be out of the office starting 02/19/2007 and will not return until
02/20/2007.

I will respond to your message when I return.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.linux-foundation.org/pipermail/netem/attachments/20070219/015bf8c1/attachment.htm
From calum.lind at newport-networks.com  Tue Feb 20 04:39:46 2007
From: calum.lind at newport-networks.com (Calum Lind)
Date: Wed Apr 18 12:51:21 2007
Subject: Packet Loss Issue with Netem on Linux Bridge
In-Reply-To: <5640c7e00702191304m68041b65q1aaeef0ea107e060@xxxxxxxxxxxxxx>
Message-ID: <000901c754ec$364cb400$e301a8c0@xxxxxxxxxxxxxxxxxxx>

I was aware of that, but from what I read in the LARTC documentation there is
limited functionality on the inbound queue, and therefore I did not want to
complicate the issue for someone new to traffic shaping and netem.



-----Original Message-----
From: Ian McDonald [mailto:ian.mcdonald@xxxxxxxxxxx]
Sent: 19 February 2007 21:04
To: Calum Lind
Cc: netem@xxxxxxxxxxxxxx
Subject: Re: Packet Loss Issue with Netem on Linux Bridge

In theory you can do it on inbound queues by using ifb, but I haven't tested
it myself:
http://linux-net.osdl.org/index.php/Netem#How_can_I_use_netem_on_incoming_traffic.3F

On 2/20/07, Calum Lind <calum.lind@xxxxxxxxxxxxxxxxxxxx> wrote:
>
>
>
>
> This is a basic concept of Netem: by default it will only apply
filters to packets on the egress of an interface. Therefore, to apply loss to
packets traveling from the eth0 side through to the eth1 side you need to apply the
filters to eth1 (and vice versa). It is documented but not very clear.
>
>
>
> Hope this helps
>
>
>
> Calum
>
>
>
>
>
>   ________________________________

>
> From: netem-bounces@xxxxxxxxxxxxxx [mailto:netem-bounces@xxxxxxxxxxxxxx]
On Behalf Of Ritesh Taank
>  Sent: 19 February 2007 12:19
>  To: netem@xxxxxxxxxxxxxx
>  Subject: Packet Loss Issue with Netem on Linux Bridge
>
>
>
>
> Hello,
>
>  I have a linux box (2.6.19) with two NICs (eth0 and eth1), set up as a
bridge.
>
>  The bridge sits between a server machine and a client machine (via
cross-over cables).
>
>  The server machine is connected to the eth0 NIC.
>
>  The client machine is connected to the eth1 NIC.
>
>  I am sending raw TCP data to the client machine from the server machine.
This means data enters the bridge via eth0, and is bridged through to eth1,
where it leaves the linux box and arrives at the client machine.
>
>  The forward channel is therefore all packets travelling from the server
to the client. The reverse channel is obviously all the acknowledgements
flowing back to the server.
>
>  I am using Netem to add loss rates to eth0 and eth1, either together or
independently.
>
>  I have noticed something very interesting.
>
>  When adding a very high packet loss rate to eth0 (i.e. tc qdisc add dev
eth0 root netem loss 40%) and then running my raw TCP data throughput tests,
I get full throughput results, exactly as I do when there is no
qdisc attached to eth0. In fact, for loss rates ranging from 0% to 50% my
results are unchanged. Basically, I do not think Netem is dropping packets
in the forward channel direction, which is what I am trying to emulate.
>
>  Looking at the structure of how netem is implemented, by attaching a
qdisc to eth0, all packets entering eth0 from the server should be subjected
to any rule applied to eth0.
>
>  Is anybody able to explain what is happening here?
>
>  Thanks in advance.
>
>  Ritesh
>
>
> ---
>    Adaptive Communications Networks Research Group
>    Electronic Engineering Dept.
>  Aston University
>  Birmingham
>    B7 4ET
>
>    t: +44 (0)7732 069 667
>    e: taankr@xxxxxxxxxxx
>    w: www-users.aston.ac.uk/~taankr
>
>
>
>  ________________________________
 
>
>
>



-- 
Web: http://wand.net.nz/~iam4
Blog: http://iansblog.jandi.co.nz
WAND Network Research Group








From shemminger at linux-foundation.org  Tue Feb 20 10:57:03 2007
From: shemminger at linux-foundation.org (Stephen Hemminger)
Date: Wed Apr 18 12:51:21 2007
Subject: Packet Loss Issue with Netem on Linux Bridge
In-Reply-To: <000901c754ec$364cb400$e301a8c0@xxxxxxxxxxxxxxxxxxx>
References: <5640c7e00702191304m68041b65q1aaeef0ea107e060@xxxxxxxxxxxxxx>
	<000901c754ec$364cb400$e301a8c0@xxxxxxxxxxxxxxxxxxx>
Message-ID: <20070220105703.52a26e72@localhost>

On Tue, 20 Feb 2007 12:39:46 -0000
"Calum Lind" <calum.lind@xxxxxxxxxxxxxxxxxxxx> wrote:

> I was aware of that, but from what I read in the LARTC documentation there is
> limited functionality on the inbound queue, and therefore I did not want to
> complicate the issue for someone new to traffic shaping and netem.
> 
> 
> 
> -----Original Message-----
> From: Ian McDonald [mailto:ian.mcdonald@xxxxxxxxxxx]
> Sent: 19 February 2007 21:04
> To: Calum Lind
> Cc: netem@xxxxxxxxxxxxxx
> Subject: Re: Packet Loss Issue with Netem on Linux Bridge
> 
> In theory you can do it on inbound queues by using ifb, but I haven't tested
> it myself:
> http://linux-net.osdl.org/index.php/Netem#How_can_I_use_netem_on_incoming_traffic.3F
> 
> On 2/20/07, Calum Lind <calum.lind@xxxxxxxxxxxxxxxxxxxx> wrote:
> >
> >
> >
> >
> > This is a basic concept of Netem: by default it will only apply
> filters to packets on the egress of an interface. Therefore, to apply loss to
> packets traveling from the eth0 side through to the eth1 side you need to apply the
> filters to eth1 (and vice versa). It is documented but not very clear.
> 

If you read the FAQ in the documentation on http://linux-net.osdl.org/index.php/Netem
you will see how to use IFB to apply netem on input.

From taankr at aston.ac.uk  Wed Feb 21 03:29:16 2007
From: taankr at aston.ac.uk (Ritesh Taank)
Date: Wed Apr 18 12:51:21 2007
Subject: Packet Loss Issue with Netem on Linux Bridge
In-Reply-To: <RET-j1JImg6e6c9ba6e1d3fd51f443c10aede8abf3-20070220105703.52a26e72@localhost>
References: <5640c7e00702191304m68041b65q1aaeef0ea107e060@xxxxxxxxxxxxxx>
	<000901c754ec$364cb400$e301a8c0@xxxxxxxxxxxxxxxxxxx>
	<RET-j1JImg6e6c9ba6e1d3fd51f443c10aede8abf3-20070220105703.52a26e72@localhost>
Message-ID: <1172057356.3455.1.camel@localhost.localdomain>

The question is, is there a difference between using an ifb on the
incoming interface (eth0), as opposed to applying the qdisc to the outgoing
interface (eth1)? This is for my setup in particular that I am referring
to.

Ritesh

On Tue, 2007-02-20 at 10:57 -0800, Stephen Hemminger wrote:

> On Tue, 20 Feb 2007 12:39:46 -0000
> "Calum Lind" <calum.lind@xxxxxxxxxxxxxxxxxxxx> wrote:
> 
> > I was aware of that, but from what I read in the LARTC documentation there is
> > limited functionality on the inbound queue, and therefore I did not want to
> > complicate the issue for someone new to traffic shaping and netem.
> > 
> > 
> > 
> > -----Original Message-----
> > From: Ian McDonald [mailto:ian.mcdonald@xxxxxxxxxxx]
> > Sent: 19 February 2007 21:04
> > To: Calum Lind
> > Cc: netem@xxxxxxxxxxxxxx
> > Subject: Re: Packet Loss Issue with Netem on Linux Bridge
> > 
> > In theory you can do it on inbound queues by using ifb, but I haven't tested
> > it myself:
> > http://linux-net.osdl.org/index.php/Netem#How_can_I_use_netem_on_incoming_traffic.3F
> > 
> > On 2/20/07, Calum Lind <calum.lind@xxxxxxxxxxxxxxxxxxxx> wrote:
> > >
> > >
> > >
> > >
> > > This is a basic concept of Netem: by default it will only apply
> > filters to packets on the egress of an interface. Therefore, to apply loss to
> > packets traveling from the eth0 side through to the eth1 side you need to apply the
> > filters to eth1 (and vice versa). It is documented but not very clear.
> > 
> 
> If you read the FAQ in the documentation on http://linux-net.osdl.org/index.php/Netem
> you will see how to use IFB to apply netem on input.

---
Adaptive Communications Networks Research Group
Electronic Engineering Dept.
Aston University
Birmingham
B7 4ET

t: +44 (0)7732 069 667
e: taankr@xxxxxxxxxxx
w: www-users.aston.ac.uk/~taankr
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.linux-foundation.org/pipermail/netem/attachments/20070221/fd72fa52/attachment.htm
From shemminger at linux-foundation.org  Wed Feb 21 10:09:56 2007
From: shemminger at linux-foundation.org (Stephen Hemminger)
Date: Wed Apr 18 12:51:21 2007
Subject: Packet Loss Issue with Netem on Linux Bridge
In-Reply-To: <1172057356.3455.1.camel@localhost.localdomain>
References: <5640c7e00702191304m68041b65q1aaeef0ea107e060@xxxxxxxxxxxxxx>
	<000901c754ec$364cb400$e301a8c0@xxxxxxxxxxxxxxxxxxx>
	<RET-j1JImg6e6c9ba6e1d3fd51f443c10aede8abf3-20070220105703.52a26e72@localhost>
	<1172057356.3455.1.camel@localhost.localdomain>
Message-ID: <20070221100956.5ab1031d@freekitty>

On Wed, 21 Feb 2007 11:29:16 +0000
Ritesh Taank <taankr@xxxxxxxxxxx> wrote:

> The question is, is there a difference between using an ifb on the
> incoming interface (eth0), as opposed to applying the qdisc to the outgoing
> interface (eth1)? This is for my setup in particular that I am referring
> to.
> 
> Ritesh

There probably is a performance difference. Doing it on the input side
means potentially two queue traversals. Normally, input packets aren't
queued.


-- 
Stephen Hemminger <shemminger@xxxxxxxxxxxxxxxxxxxx>

Frobaumann atik.ee.ethz.ch  Wed Feb  7 10:36:22 2007
From: baumanatik.ee.ethz.ch (Rainer Baumann)
Date: Wed Apr 18 17:37:50 2007
Subject: [Fwd:  [PATCH 2.6.18 0/2] LARTC: traccontrol for netem]
In-Reply-To: <45827B5C.3090402@xxxxxxxxxxxxxx>
References: <45827B5C.3090402@xxxxxxxxxxxxxx>
Message-ID: <45CA1C26.1080405@xxxxxxxxxxxxxx>

Hi Stephen

I juswanted to ask you, if you already had timto test our trace
extensiofor neteas discussed on the 13th of December.

Cheers
Rainer

Rainer Baumanwrote:
> Hi Stephen
>
> As discussed yesterday, herour patches to integrattrace control into netem
>
>
>
> TracControl for Netem: Emulatnetwork properties such as long range dependency and self-similarity of cross-traffic.
>
> A new optio(trace) has been added to thnetem command. If the trace option is used, the values for packet delay etc. are read from a pregenerated trace file, afterwards the packets are processed by the normal netem functions. The packet action values are readout from the trace file in user space and sent to kernel space via configfs.
>
>
>
>
>
>
>
> _______________________________________________
> Netemailing list
> Netem@xxxxxxxxxxxxxx
> https://lists.osdl.org/mailman/listinfo/netem
>   







Frosammo2828 agmail.com  Wed Feb  7 17:38:41 2007
From: sammo2828 agmail.co(Sammo)
Date: Wed Apr 18 17:37:50 2007
Subject: Slow transfer with neteeven when no delay
Message-ID: <4e4d89fe0702071738t246d3c6n30e0e2fdc57e6865@xxxxxxxxxxxxxx>

My setup consists of two windows PCs connected with a Linux PC acting
as a bridge.

Whethbridge has no qdiscs added:
   tc qdisc del dev eth0 root
   tc qdisc del dev eth1 root
ping timis <1ms, and I aable to get around 580Mbps using iperf
(tcp window 1024K).

After configuring thbridgwith netem no delay and default limit
1000 as follows,
   tc qdisc add dev eth0 roonetem
   tc qdisc add dev eth1 roonetem
ping timis around 7ms, and I aonly able to get 60Mbps using iperf
(tcp window 1024K).

I tried increasing limi10000 buno difference.

Running Knoppix 5.1.1 with kernel 2.6.19
Two GigabiEthernecards are Broadcom NetXtreme using tg3 driver
Xeo3.0GHz (closto 0% utilization)

Any suggestions why it's so slow with netem?

Thanks

Froshemminger alinux-foundation.org  Thu Feb  8 08:38:29 2007
From: shemminger alinux-foundation.org (Stephen Hemminger)
Date: Wed Apr 18 17:37:50 2007
Subject: Slow transfer with neteeven when no delay
In-Reply-To: <4e4d89fe0702071738t246d3c6n30e0e2fdc57e6865@xxxxxxxxxxxxxx>
References: <4e4d89fe0702071738t246d3c6n30e0e2fdc57e6865@xxxxxxxxxxxxxx>
Message-ID: <20070208083829.7a26d5eb@oldman>

OThu, 8 Feb 2007 12:38:41 +1100
Sammo <sammo2828@xxxxxxxxx> wrote:

> My setup consists of two windows PCs connected with a Linux PC acting
> as a bridge.
> 
> Whethbridge has no qdiscs added:
>    tc qdisc del dev eth0 root
>    tc qdisc del dev eth1 root
> ping timis <1ms, and I aable to get around 580Mbps using iperf
> (tcp window 1024K).
> 
> After configuring thbridgwith netem no delay and default limit
> 1000 as follows,
>    tc qdisc add dev eth0 roonetem
>    tc qdisc add dev eth1 roonetem
> ping timis around 7ms, and I aonly able to get 60Mbps using iperf
> (tcp window 1024K).

Netealways delays for aleast one system clock (HZ). There are
a couplof workarounds:
* SeHZ to 1000 on 2.6
* Usreal timkernel and netem-rt patch to get faster response.

Frosammo2828 agmail.com  Thu Feb  8 13:50:54 2007
From: sammo2828 agmail.co(Sammo)
Date: Wed Apr 18 17:37:50 2007
Subject: Slow transfer with neteeven when no delay
In-Reply-To: <20070208083829.7a26d5eb@oldman>
References: <4e4d89fe0702071738t246d3c6n30e0e2fdc57e6865@xxxxxxxxxxxxxx>
	<20070208083829.7a26d5eb@oldman>
Message-ID: <4e4d89fe0702081350x1a3901dk80c30843b8e1bb35@xxxxxxxxxxxxxx>

Do you know of any live-cds (likKnoppix) thahave kernel with HZ
1000 as standard?

O2/9/07, Stephen Hemminger <shemminger@xxxxxxxxxxxxxxxxxxxx> wrote:
> OThu, 8 Feb 2007 12:38:41 +1100
> Sammo <sammo2828@xxxxxxxxx> wrote:
>
> > My setup consists of two windows PCs connected with a Linux PC acting
> > as a bridge.
> >
> > Whethbridge has no qdiscs added:
> >    tc qdisc del dev eth0 root
> >    tc qdisc del dev eth1 root
> > ping timis <1ms, and I aable to get around 580Mbps using iperf
> > (tcp window 1024K).
> >
> > After configuring thbridgwith netem no delay and default limit
> > 1000 as follows,
> >    tc qdisc add dev eth0 roonetem
> >    tc qdisc add dev eth1 roonetem
> > ping timis around 7ms, and I aonly able to get 60Mbps using iperf
> > (tcp window 1024K).
>
> Netealways delays for aleast one system clock (HZ). There are
> a couplof workarounds:
> * SeHZ to 1000 on 2.6
> * Usreal timkernel and netem-rt patch to get faster response.
>

Froshemminger alinux-foundation.org  Thu Feb  8 14:18:47 2007
From: shemminger alinux-foundation.org (Stephen Hemminger)
Date: Wed Apr 18 17:37:50 2007
Subject: Slow transfer with neteeven when no delay
In-Reply-To: <4e4d89fe0702081350x1a3901dk80c30843b8e1bb35@xxxxxxxxxxxxxx>
References: <4e4d89fe0702071738t246d3c6n30e0e2fdc57e6865@xxxxxxxxxxxxxx>
	<20070208083829.7a26d5eb@oldman>
	<4e4d89fe0702081350x1a3901dk80c30843b8e1bb35@xxxxxxxxxxxxxx>
Message-ID: <20070208141847.6f09d44d@oldman>

OFri, 9 Feb 2007 08:50:54 +1100
Sammo <sammo2828@xxxxxxxxx> wrote:

> Do you know of any live-cds (likKnoppix) thahave kernel with HZ
> 1000 as standard?
> 
> O2/9/07, Stephen Hemminger <shemminger@xxxxxxxxxxxxxxxxxxxx> wrote:
> > OThu, 8 Feb 2007 12:38:41 +1100
> > Sammo <sammo2828@xxxxxxxxx> wrote:
> >
> > > My setup consists of two windows PCs connected with a Linux PC acting
> > > as a bridge.
> > >
> > > Whethbridge has no qdiscs added:
> > >    tc qdisc del dev eth0 root
> > >    tc qdisc del dev eth1 root
> > > ping timis <1ms, and I aable to get around 580Mbps using iperf
> > > (tcp window 1024K).
> > >
> > > After configuring thbridgwith netem no delay and default limit
> > > 1000 as follows,
> > >    tc qdisc add dev eth0 roonetem
> > >    tc qdisc add dev eth1 roonetem
> > > ping timis around 7ms, and I aonly able to get 60Mbps using iperf
> > > (tcp window 1024K).
> >
> > Netealways delays for aleast one system clock (HZ). There are
> > a couplof workarounds:
> > * SeHZ to 1000 on 2.6
> > * Usreal timkernel and netem-rt patch to get faster response.
> >

Havyou tried Ubuntu Live-CD? or Fedora?

Froshemminger alinux-foundation.org  Tue Feb 13 16:16:02 2007
From: shemminger alinux-foundation.org (Stephen Hemminger)
Date: Wed Apr 18 17:37:50 2007
Subject: [Fwd: [PATCH 2.6.18 0/2] LARTC: traccontrol for netem]
In-Reply-To: <45CA1C26.1080405@xxxxxxxxxxxxxx>
References: <45827B5C.3090402@xxxxxxxxxxxxxx> <45CA1C26.1080405@xxxxxxxxxxxxxx>
Message-ID: <20070213161602.16607203@freekitty>

OWed, 07 Feb 2007 19:36:22 +0100
Rainer Bauman<baumann@xxxxxxxxxxxxxx> wrote:

> Hi Stephen
> 
> I juswanted to ask you, if you already had timto test our trace
> extensiofor neteas discussed on the 13th of December.
> 
> Cheers
> Rainer
> 
> Rainer Baumanwrote:
> > Hi Stephen
> >
> > As discussed yesterday, herour patches to integrattrace control into netem
> >
> >
> >
> > TracControl for Netem: Emulatnetwork properties such as long range dependency and self-similarity of cross-traffic.
> >
> > A new optio(trace) has been added to thnetem command. If the trace option is used, the values for packet delay etc. are read from a pregenerated trace file, afterwards the packets are processed by the normal netem functions. The packet action values are readout from the trace file in user space and sent to kernel space via configfs.
> >
> >
> 

I looked aisome more, and want to go in and clean the configuration and buffer interface
slightly beforinclusion. Oncit is in I get stuck with the ABI so don't want to do something
wrong.

-- 
StepheHemminger <shemminger@xxxxxxxxxxxxxxxxxxxx>

Frobaumann atik.ee.ethz.ch  Wed Feb 14 00:57:07 2007
From: baumanatik.ee.ethz.ch (Rainer Baumann)
Date: Wed Apr 18 17:37:50 2007
Subject: [Fwd: [PATCH 2.6.18 0/2] LARTC: traccontrol for netem]
In-Reply-To: <20070213161602.16607203@freekitty>
References: <45827B5C.3090402@xxxxxxxxxxxxxx>	<45CA1C26.1080405@xxxxxxxxxxxxxx>
	<20070213161602.16607203@freekitty>
Message-ID: <45D2CEE3.9020207@xxxxxxxxxxxxxx>



StepheHemminger wrote:
> OWed, 07 Feb 2007 19:36:22 +0100
> Rainer Bauman<baumann@xxxxxxxxxxxxxx> wrote:
>
>   
>> Hi Stephen
>>
>> I juswanted to ask you, if you already had timto test our trace
>> extensiofor neteas discussed on the 13th of December.
>>
>> Cheers
>> Rainer
>>
>> Rainer Baumanwrote:
>>     
>>> Hi Stephen
>>>
>>> As discussed yesterday, herour patches to integrattrace control into netem
>>>
>>>
>>>
>>> TracControl for Netem: Emulatnetwork properties such as long range dependency and self-similarity of cross-traffic.
>>>
>>> A new optio(trace) has been added to thnetem command. If the trace option is used, the values for packet delay etc. are read from a pregenerated trace file, afterwards the packets are processed by the normal netem functions. The packet action values are readout from the trace file in user space and sent to kernel space via configfs.
>>>
>>>
>>>       
>
> I looked aisome more, and want to go in and clean the configuration and buffer interface
> slightly beforinclusion. Oncit is in I get stuck with the ABI so don't want to do something
> wrong.
>   
Jusleme know, if we can help.


Frohorms averge.net.au  Tue Feb 13 20:26:31 2007
From: horms averge.net.au (Simon Horman)
Date: Wed Apr 18 17:37:50 2007
Subject: UpdatOSDL/Linux-Foundation maintainer addresses
Message-ID: <20070214042631.GB3327@xxxxxxxxxxxx>

Hi,

I'nosure if this is apprriate or not, but here goes anyway.

Thpatch below updates MAINTAIER address 
  Individuals (Only Andrew :): osdl.org -> linux-foundation.org
  Lists:                       osdl.org -> lists.osdl.org

I assumthlatter will change at some stage, but at least
with this changthosdl/linux-foundation lists are consistent.

Signed-off-by: SimoHorman <horms@xxxxxxxxxxxx>

Index: linux-2.6/MAINTAINERS
===================================================================
--- linux-2.6.orig/MAINTAINERS	2007-02-14 13:09:28.000000000 +0900
+++ linux-2.6/MAINTAINERS	2007-02-14 13:18:54.000000000 +0900
@@ -1283,7 +1283,7 @@
 ETHERNET BRIDGE
 P:	StepheHemminger
 M:	shemminger@xxxxxxxxxxxxxxxxxxxx
-L:	bridge@xxxxxxxx
+L:	bridge@xxxxxxxxxxxxxx
 W:	http://bridge.sourceforge.net/
 S:	Maintained
 
@@ -1298,13 +1298,13 @@
 
 EXT3 FILE SYSTEM
 P:	StepheTweedie, Andrew Morton
-M:	sct@xxxxxxxxxx, akpm@xxxxxxxx, adilger@xxxxxxxxxxxxx
+M:	sct@xxxxxxxxxx, akpm@xxxxxxxxxxxxxxxxxxxx, adilger@xxxxxxxxxxxxx
 L:	linux-ext4@xxxxxxxxxxxxxxx
 S:	Maintained
 
 EXT4 FILE SYSTEM
 P:	StepheTweedie, Andrew Morton
-M:	sct@xxxxxxxxxx, akpm@xxxxxxxx, adilger@xxxxxxxxxxxxx
+M:	sct@xxxxxxxxxx, akpm@xxxxxxxxxxxxxxxxxxxx, adilger@xxxxxxxxxxxxx
 L:	linux-ext4@xxxxxxxxxxxxxxx
 S:	Maintained
 
@@ -1901,7 +1901,7 @@
 
 JOURNALLING LAYER FOR BLOCK DEVICES (JBD)
 P:	StepheTweedie, Andrew Morton
-M:	sct@xxxxxxxxxx, akpm@xxxxxxxx
+M:	sct@xxxxxxxxxx, akpm@xxxxxxxxxxxxxxxxxxxx
 L:	linux-ext4@xxxxxxxxxxxxxxx
 S:	Maintained
 
@@ -1972,7 +1972,7 @@
 M:	ebiederm@xxxxxxxxxxxx
 W:	http://www.xmission.com/~ebiederm/files/kexec/
 L:	linux-kernel@xxxxxxxxxxxxxxx
-L:	fastboot@xxxxxxxx
+L:	fastboot@xxxxxxxxxxxxxx
 S:	Maintained
 
 KPROBES
@@ -2310,7 +2310,7 @@
 NETEM NETWORK EMULATOR
 P:	StepheHemminger
 M:	shemminger@xxxxxxxxxxxxxxxxxxxx
-L:	netem@xxxxxxxx
+L:	netem@xxxxxxxxxxxxxx
 S:	Maintained
 
 NETFILTER/IPTABLES/IPCHAINS
@@ -2349,7 +2349,7 @@
 
 NETWORK DEVICE DRIVERS
 P:	Andrew Morton
-M:	akpm@xxxxxxxx
+M:	akpm@xxxxxxxxxxxxxxxxxxxx
 P:	Jeff Garzik
 M:	jgarzik@xxxxxxxxx
 L:	netdev@xxxxxxxxxxxxxxx
@@ -3044,7 +3044,7 @@
 SOFTWARE SUSPEND:
 P:	Pavel Machek
 M:	pavel@xxxxxxx
-L:	linux-pm@xxxxxxxx
+L:	linux-pm@xxxxxxxxxxxxxx
 S:	Maintained
 
 SONIC NETWORK DRIVER

Frolucas.nussbauat imag.fr  Mon Feb 19 02:36:28 2007
From: lucas.nussbauaimag.fr (Lucas Nussbaum)
Date: Wed Apr 18 17:37:50 2007
Subject: Fw: [BUG?/SCHED] neteand tbf seeto busy wait with some
	clock sources
Message-ID: <20070219103628.GA7621@xxxxxxxxxxxxxxxx>

Hi,

I senthis mail to thnetdev mailing last thursday, but haven't
received any replies. Maybsomeonwill be able to help me here ?

----- Forwarded messagfroLucas Nussbaum <lucas.nussbaum@xxxxxxx> -----

From: Lucas Nussbau<lucas.nussbaum@xxxxxxx>
To: netdev@xxxxxxxxxxxxxxx
Date: 	Thu, 15 Feb 2007 16:40:07 +0100
Subject: [BUG?/SCHED] neteand tbf seeto busy wait with some clock sources

Hi,

Whilexperimenting with neteand tbf, I ran into some strange results.

Experimental setup:
tc qdisc add dev eth2 roonetedelay 10ms
Linux 2.6.20-rc6 ; HZ=250

I measured thlatency using a modified "ping" implementation, to allow
for high frequency measuremen(onmeasure every 0.1ms). I compared the
results using differenclock sources.

Sehttp://www-id.imag.fr/~nussbaum/sched/clk-latency.png

Thresults with CLK_JIFFIES arthe ones I expected: one clearly sees
thinfluencof HZ, with latency varying around 10ms +/- (1/2)*(1000/HZ).

Othother hand, the results with CLK_GETTIMEOFDAY or CLK_CPU don't
seeto bbound to the HZ setting. Looking at the source, I suspect
thaneteis actually sort of busy-waiting, by re-setting the timer to
thold value.

I instrumented netem_dequeuto confirthis (see [1]), and
PSCHED_US2JIFFIE(delay)) returns 0, causing thtimer to brescheduled
athsame jiffie. I could see netem_dequeue being called up to 150
times during a jiffi(aHZ=250).

So, my questiois: Is this thexpected behaviour ? Wouldn't it be
better to sethtimer to jiffies + 1 if the delay is 0 ? Or to send
thpackeimmediately if the delay is 0, instead of waiting ?

I gosimilar results with tbf (frames being spaced "too well", instead
of bursting).

[1] http://www-id.imag.fr/~nussbaum/sched/netem_dequeue.diff

Thank you,

----- End forwarded messag-----

-- 
| Lucas Nussbau                       PhD studen|
| lucas.nussbaum@xxxxxxx        LIG / ProjeMESCAL |
| jabber: lucas@xxxxxxxxxxx    +33 (0)6 64 71 41 65 |
| homepage:        http://www-id.imag.fr/~nussbaum/ |

Frotaankr aaston.ac.uk  Mon Feb 19 04:18:52 2007
From: taankr aaston.ac.uk (Ritesh Taank)
Date: Wed Apr 18 17:37:50 2007
Subject: PackeLoss Issuwith Netem on LInux Bridge
Message-ID: <1171887532.3468.8.camel@localhost.localdomain>

Hello,

I hava linux box (2.6.19) with two NICs (eth0 and eth1), setup as a
bridge.

Thbridgsits between a server machine and a client  machine (via
cross-over cables).

Thserver machinis connected to eth0 NIC.

Thclienmachine is connected to eth1 NIC.

I asending raw TCP data to thclient machine from the server machine.
This means data enters thbridgvia eth0, and is bridged through to
eth1, wherileaves the linux box and arrives at the client machine.

Thforward channel is thereforall packets travelling from the server
to thclient. Threverse channel is obviously all the acknowledgements
flowing back to thserver.

I ausing Neteto add loss rates to eth0 and eth1, either together or
independently.

I havnoticed something very interesting.

Wheadding a very high packeloss rate to eth0 (i.e. tc qdisc add dev
eth0 rooneteloss 40%) and then running my raw TCP data throughput
tests, i gefull throughpuresults, exactly as i do for when there are
is no qdisc attached to eth0. Ifact, for loss rates ranging fro0% to
50% my results arunchanged. Basically, i do nothink Netem is
dropping packets ithforward channel direction, which is what I am
trying to emulate.

Looking athstructure of how netem is implemented, by attaching a
qdisc to eth0, all packets entering eth0 frothserver should be
subjected to any rulapplied to eth0.

Is anybody ablto explain whais happening here?

Thanks iadvance.

Ritesh

---
AdaptivCommunications Networks Research Group
Electronic Engineering Dept.
AstoUniversity
Birmingham
B7 4ET

t: +44 (0)7732 069 667
e: taankr@xxxxxxxxxxx
w: www-users.aston.ac.uk/~taankr
-------------- nexpar--------------
AHTML attachmenwas scrubbed...
URL: http://lists.linux-foundation.org/pipermail/netem/attachments/20070219/8dddf977/attachment-0001.htm
Frocalum.lind anewport-networks.com  Mon Feb 19 04:58:26 2007
From: calum.lind anewport-networks.co(Calum Lind)
Date: Wed Apr 18 17:37:50 2007
Subject: PackeLoss Issuwith Netem on LInux Bridge
In-Reply-To: <1171887532.3468.8.camel@localhost.localdomain>
Message-ID: <001901c75425$a7e8e650$e301a8c0@xxxxxxxxxxxxxxxxxxx>

This is a basic concepof Netethat by default it will only apply filters
to packets othEgress of an interface therefore to apply loss to packets
traveling froeth0 sidthrough to eth1 side you need to apply the filters
to eth1 (and visa-versa). Iis documented bunot very clear. 

 

Hopthis helps

 

Calu

 

 

  _____  

From: netem-bounces@xxxxxxxxxxxxxx [mailto:netem-bounces@xxxxxxxxxxxxxx] On
Behalf Of Ritesh Taank
Sent: 19 February 2007 12:19
To: netem@xxxxxxxxxxxxxx
Subject: PackeLoss Issuwith Netem on LInux Bridge

 

Hello,

I hava linux box (2.6.19) with two NICs (eth0 and eth1), setup as a
bridge.

Thbridgsits between a server machine and a client  machine (via
cross-over cables).

Thserver machinis connected to eth0 NIC.

Thclienmachine is connected to eth1 NIC.

I asending raw TCP data to thclient machine from the server machine.
This means data enters thbridgvia eth0, and is bridged through to eth1,
wherileaves the linux box and arrives at the client machine.

Thforward channel is thereforall packets travelling from the server to
thclient. Threverse channel is obviously all the acknowledgements
flowing back to thserver.

I ausing Neteto add loss rates to eth0 and eth1, either together or
independently.

I havnoticed something very interesting.

Wheadding a very high packeloss rate to eth0 (i.e. tc qdisc add dev eth0
rooneteloss 40%) and then running my raw TCP data throughput tests, i
gefull throughpuresults, exactly as i do for when there are is no qdisc
attached to eth0. Ifact, for loss rates ranging fro0% to 50% my results
arunchanged. Basically, i do nothink Netem is dropping packets in the
forward channel direction, which is whaI atrying to emulate.

Looking athstructure of how netem is implemented, by attaching a qdisc
to eth0, all packets entering eth0 frothserver should be subjected to
any rulapplied to eth0.

Is anybody ablto explain whais happening here?

Thanks iadvance.

Ritesh


---
AdaptivCommunications Networks Research Group
Electronic Engineering Dept.
AstoUniversity
Birmingham
B7 4ET

t: +44 (0)7732 069 667
e: taankr@xxxxxxxxxxx
w: www-users.aston.ac.uk/~taankr <http://www-users.aston.ac.uk/~taankr/>  

 




---------------------------------------------------------------------------------------------
This e-mail may contaiconfidential and/or privileged information.
If you arnothe intended recipient (or have received this e-mail in error) please
notify thsender immediately and deletthis e-mail. Any unauthorized copying,
disclosuror distribution of thcontents in this e-mail is strictly forbidden.
---------------------------------------------------------------------------------------------
NewporNetworks Limited is registered in England. Registration number 4067591.
Registered office: 6 St. Andrew Street, LondoEC4A 3LX
---------------------------------------------------------------------------------------------
-------------- nexpar--------------
AHTML attachmenwas scrubbed...
URL: http://lists.linux-foundation.org/pipermail/netem/attachments/20070219/0cea7816/attachment-0001.htm
Fromarco.happenhofer atuwien.ac.at  Mon Feb 19 07:48:53 2007
From: marco.happenhofer atuwien.ac.a(Happenhofer Marco)
Date: Wed Apr 18 17:37:50 2007
Subject: Unexpected loss behaviour
Message-ID: <PNEDLFAPPBAKOJHPDABLCEFECCAA.marco.happenhofer@xxxxxxxxxxxx>

I wanto delay and drop traffic.
Imy scripI assign to TCP, ICMP and UDP certain classes and delay and
drop icmp packets.
I expected thaafter running thscript that (stat.) every 10th icmp packet
will bdropped, buthat does not happened!
Delay and duplicatioworks fine!

Is my scripwrong, is this function nocovered yet or is there a bug?

SCRIPT BEGINN
DEV="eth0"

#clear recenconfigurations
tc qdisc del dev $DEV root

tc qdisc add dev $DEV roohandl1: htb default 40

tc class add dev $DEV paren1: classid 1:1 htb rat10mbit

tc class add dev $DEV paren1:1 classid 1:11 htb rat10kbit ceil 20kbit
burs50k
tc class add dev $DEV paren1:1 classid 1:12 htb rat2mbit
tc class add dev $DEV paren1:1 classid 1:13 htb rat1mbit

tc qdisc add dev $DEV paren1:11 handl11: netem drop 10%
tc qdisc add dev $DEV paren1:12 handl12: sfq perturb 1
tc qdisc add dev $DEV paren1:13 handl13: sfq perturb 10

#assigtraffic
U32="tc filter add dev $DEV protocol ip paren1:0 prio 1 u32"
$U32 match ip protocol 1 0xff flowid 1:11
$U32 match ip protocol 6 0xff flowid 1:12
$U32 match ip protocol 17 0xff flowid 1:13
SCRIPT END

Kernel: 2.6.19-1.2911.fc6

Thanks, for any help!

-----------------------------------------
Instituf?r Breitbandkommunikation
TechnischUniversit?Wien
Favoritenstra?9/388
A-1040 Wien
tel: +43 1 58801 38833
mailto: marco.happenhofer@xxxxxxxxxxxx
-----------------------------------------


From ian.mcdonald at jandi.co.nz  Mon Feb 19 13:04:21 2007
From: ian.mcdonald at jandi.co.nz (Ian McDonald)
Date: Wed Apr 18 17:37:50 2007
Subject: Packet Loss Issue with Netem on Linux Bridge
In-Reply-To: <001901c75425$a7e8e650$e301a8c0@xxxxxxxxxxxxxxxxxxx>
References: <1171887532.3468.8.camel@localhost.localdomain>
	<001901c75425$a7e8e650$e301a8c0@xxxxxxxxxxxxxxxxxxx>
Message-ID: <5640c7e00702191304m68041b65q1aaeef0ea107e060@xxxxxxxxxxxxxx>

In theory you can do this on inbound queues by using ifb, but I haven't tested it myself:
http://linux-net.osdl.org/index.php/Netem#How_can_I_use_netem_on_incoming_traffic.3F
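For what it's worth, the recipe on that page boils down to something like the
following (untested here; it assumes ifb support in the kernel, eth0 as the
receiving interface, and the 40% loss figure from the original report):

   # load the intermediate functional block device and bring it up
   modprobe ifb
   ip link set dev ifb0 up

   # redirect everything arriving on eth0 to ifb0
   tc qdisc add dev eth0 ingress
   tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 \
      flowid 1:1 action mirred egress redirect dev ifb0

   # netem on ifb0's root now acts, in effect, on eth0's ingress traffic
   tc qdisc add dev ifb0 root netem loss 40%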

On 2/20/07, Calum Lind <calum.lind@xxxxxxxxxxxxxxxxxxxx> wrote:
>
>
>
>
> This is a basic concept of Netem: by default it will only apply its rules to packets on the egress of an interface, therefore to apply loss to packets travelling from the eth0 side through to the eth1 side you need to apply it to eth1 (and vice versa). It is documented, but not very clearly.
>
>
>
> Hope this helps
>
>
>
> Calum
>
>
>
>
>
>   ________________________________

>
> From: netem-bounces@xxxxxxxxxxxxxx [mailto:netem-bounces@xxxxxxxxxxxxxx] On Behalf Of Ritesh Taank
>  Sent: 19 February 2007 12:19
>  To: netem@xxxxxxxxxxxxxx
>  Subject: Packet Loss Issue with Netem on Linux Bridge
>
>
>
>
> Hello,
>
>  I have a Linux box (2.6.19) with two NICs (eth0 and eth1), set up as a bridge.
>
>  The bridge sits between a server machine and a client machine (via cross-over cables).
>
>  The server machine is connected to the eth0 NIC.
>
>  The client machine is connected to the eth1 NIC.
>
>  I am sending raw TCP data to the client machine from the server machine. This means data enters the bridge via eth0, and is bridged through to eth1, where it leaves the Linux box and arrives at the client machine.
>
>  The forward channel is therefore all packets travelling from the server to the client. The reverse channel is obviously all the acknowledgements flowing back to the server.
>
>  I am using Netem to add loss rates to eth0 and eth1, either together or independently.
>
>  I have noticed something very interesting.
>
>  When adding a very high packet loss rate to eth0 (i.e. tc qdisc add dev eth0 root netem loss 40%) and then running my raw TCP data throughput tests, I get full throughput results, exactly as I do when there is no qdisc attached to eth0. In fact, for loss rates ranging from 0% to 50% my results are unchanged. Basically, I do not think Netem is dropping packets in the forward channel direction, which is what I am trying to emulate.
>
>  Looking at the structure of how netem is implemented, by attaching a qdisc to eth0, all packets entering eth0 from the server should be subjected to any rule applied to eth0.
>
>  Is anybody able to explain what is happening here?
>
>  Thanks in advance.
>
>  Ritesh
>
>
> ---
>    Adaptive Communications Networks Research Group
>    Electronic Engineering Dept.
>  Aston University
>  Birmingham
>    B7 4ET
>
>    t: +44 (0)7732 069 667
>    e: taankr@xxxxxxxxxxx
>    w: www-users.aston.ac.uk/~taankr
>
>
>
>  ________________________________
>
> _______________________________________________
> Netem mailing list
> Netem@xxxxxxxxxxxxxx
> https://lists.osdl.org/mailman/listinfo/netem
>
>



-- 
Web: http://wand.net.nz/~iam4
Blog: http://iansblog.jandi.co.nz
WAND Network Research Group

From osenbach at us.ibm.com  Mon Feb 19 13:39:13 2007
From: osenbach at us.ibm.com (Bryan Osenbach)
Date: Wed Apr 18 17:37:50 2007
Subject: Bryan Osenbach is out of the office.
Message-ID: <OF3997A824.B2A74B89-ON87257287.0076F2A4-87257287.0076F2A4@xxxxxxxxxx>


I will be out of the office starting 02/19/2007 and will not return until
02/20/2007.

I will respond to your message when I return.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.linux-foundation.org/pipermail/netem/attachments/20070219/015bf8c1/attachment-0001.htm
From calum.lind at newport-networks.com  Tue Feb 20 04:39:46 2007
From: calum.lind at newport-networks.com (Calum Lind)
Date: Wed Apr 18 17:37:50 2007
Subject: Packet Loss Issue with Netem on Linux Bridge
In-Reply-To: <5640c7e00702191304m68041b65q1aaeef0ea107e060@xxxxxxxxxxxxxx>
Message-ID: <000901c754ec$364cb400$e301a8c0@xxxxxxxxxxxxxxxxxxx>

I was aware of that, but from what I read in the LARTC documentation there is
limited functionality on the inbound queue, and therefore I did not want to
complicate the issue for someone new to traffic shaping and netem.



-----Original Message-----
From: Ian McDonald [mailto:ian.mcdonald@xxxxxxxxxxx]
Sent: 19 February 2007 21:04
To: Calum Lind
Cc: netem@xxxxxxxxxxxxxx
Subject: Re: Packet Loss Issue with Netem on Linux Bridge

In theory you can do this on inbound queues by using ifb, but I haven't tested
it myself:
http://linux-net.osdl.org/index.php/Netem#How_can_I_use_netem_on_incoming_traffic.3F

On 2/20/07, Calum Lind <calum.lind@xxxxxxxxxxxxxxxxxxxx> wrote:
>
>
>
>
> This is a basic concept of Netem: by default it will only apply its rules
to packets on the egress of an interface, therefore to apply loss to
packets travelling from the eth0 side through to the eth1 side you need to
apply it to eth1 (and vice versa). It is documented, but not very clearly.
>
>
>
> Hope this helps
>
>
>
> Calum
>
>
>
>
>
>   ________________________________

>
> From: netem-bounces@xxxxxxxxxxxxxx [mailto:netem-bounces@xxxxxxxxxxxxxx]
On Behalf Of Ritesh Taank
>  Sent: 19 February 2007 12:19
>  To: netem@xxxxxxxxxxxxxx
>  Subject: Packet Loss Issue with Netem on Linux Bridge
>
>
>
>
> Hello,
>
>  I have a Linux box (2.6.19) with two NICs (eth0 and eth1), set up as a
bridge.
>
>  The bridge sits between a server machine and a client machine (via
cross-over cables).
>
>  The server machine is connected to the eth0 NIC.
>
>  The client machine is connected to the eth1 NIC.
>
>  I am sending raw TCP data to the client machine from the server machine.
This means data enters the bridge via eth0, and is bridged through to eth1,
where it leaves the Linux box and arrives at the client machine.
>
>  The forward channel is therefore all packets travelling from the server
to the client. The reverse channel is obviously all the acknowledgements
flowing back to the server.
>
>  I am using Netem to add loss rates to eth0 and eth1, either together or
independently.
>
>  I have noticed something very interesting.
>
>  When adding a very high packet loss rate to eth0 (i.e. tc qdisc add dev
eth0 root netem loss 40%) and then running my raw TCP data throughput tests,
I get full throughput results, exactly as I do when there is no
qdisc attached to eth0. In fact, for loss rates ranging from 0% to 50% my
results are unchanged. Basically, I do not think Netem is dropping packets
in the forward channel direction, which is what I am trying to emulate.
>
>  Looking at the structure of how netem is implemented, by attaching a
qdisc to eth0, all packets entering eth0 from the server should be subjected
to any rule applied to eth0.
>
>  Is anybody able to explain what is happening here?
>
>  Thanks in advance.
>
>  Ritesh
>
>
> ---
>    Adaptive Communications Networks Research Group
>    Electronic Engineering Dept.
>  Aston University
>  Birmingham
>    B7 4ET
>
>    t: +44 (0)7732 069 667
>    e: taankr@xxxxxxxxxxx
>    w: www-users.aston.ac.uk/~taankr
>
>
>
>  ________________________________
 
>
> _______________________________________________
> Netem mailing list
> Netem@xxxxxxxxxxxxxx
> https://lists.osdl.org/mailman/listinfo/netem
>
>



-- 
Web: http://wand.net.nz/~iam4
Blog: http://iansblog.jandi.co.nz
WAND Network Research Group








From shemminger at linux-foundation.org  Tue Feb 20 10:57:03 2007
From: shemminger at linux-foundation.org (Stephen Hemminger)
Date: Wed Apr 18 17:37:50 2007
Subject: Packet Loss Issue with Netem on Linux Bridge
In-Reply-To: <000901c754ec$364cb400$e301a8c0@xxxxxxxxxxxxxxxxxxx>
References: <5640c7e00702191304m68041b65q1aaeef0ea107e060@xxxxxxxxxxxxxx>
	<000901c754ec$364cb400$e301a8c0@xxxxxxxxxxxxxxxxxxx>
Message-ID: <20070220105703.52a26e72@localhost>

On Tue, 20 Feb 2007 12:39:46 -0000
"Calum Lind" <calum.lind@xxxxxxxxxxxxxxxxxxxx> wrote:

> I was aware of that, but from what I read in the LARTC documentation there is
> limited functionality on the inbound queue, and therefore I did not want to
> complicate the issue for someone new to traffic shaping and netem.
> 
> 
> 
> -----Original Message-----
> From: Ian McDonald [mailto:ian.mcdonald@xxxxxxxxxxx]
> Sent: 19 February 2007 21:04
> To: Calum Lind
> Cc: netem@xxxxxxxxxxxxxx
> Subject: Re: Packet Loss Issue with Netem on Linux Bridge
> 
> In theory you can do this on inbound queues by using ifb, but I haven't tested
> it myself:
> http://linux-net.osdl.org/index.php/Netem#How_can_I_use_netem_on_incoming_traffic.3F
> 
> On 2/20/07, Calum Lind <calum.lind@xxxxxxxxxxxxxxxxxxxx> wrote:
> >
> >
> >
> >
> > This is a basic concept of Netem: by default it will only apply its rules
> to packets on the egress of an interface, therefore to apply loss to
> packets travelling from the eth0 side through to the eth1 side you need to
> apply it to eth1 (and vice versa). It is documented, but not very clearly.
> 

If you read the FAQ in the documentation at http://linux-net.osdl.org/index.php/Netem
you will see how to use IFB to apply netem on input.

From taankr at aston.ac.uk  Wed Feb 21 03:29:16 2007
From: taankr at aston.ac.uk (Ritesh Taank)
Date: Wed Apr 18 17:37:50 2007
Subject: Packet Loss Issue with Netem on Linux Bridge
In-Reply-To: <RET-j1JImg6e6c9ba6e1d3fd51f443c10aede8abf3-20070220105703.52a26e72@localhost>
References: <5640c7e00702191304m68041b65q1aaeef0ea107e060@xxxxxxxxxxxxxx>
	<000901c754ec$364cb400$e301a8c0@xxxxxxxxxxxxxxxxxxx>
	<RET-j1JImg6e6c9ba6e1d3fd51f443c10aede8abf3-20070220105703.52a26e72@localhost>
Message-ID: <1172057356.3455.1.camel@localhost.localdomain>

The question is: is there a difference between using an ifb on the
incoming interface (eth0), as opposed to applying the qdisc to the outgoing
interface (eth1)? It is my particular setup that I am referring
to.

Ritesh

On Tue, 2007-02-20 at 10:57 -0800, Stephen Hemminger wrote:

> On Tue, 20 Feb 2007 12:39:46 -0000
> "Calum Lind" <calum.lind@xxxxxxxxxxxxxxxxxxxx> wrote:
> 
> > I was aware of that, but from what I read in the LARTC documentation there is
> > limited functionality on the inbound queue, and therefore I did not want to
> > complicate the issue for someone new to traffic shaping and netem.
> > 
> > 
> > 
> > -----Original Message-----
> > From: Ian McDonald [mailto:ian.mcdonald@xxxxxxxxxxx]
> > Sent: 19 February 2007 21:04
> > To: Calum Lind
> > Cc: netem@xxxxxxxxxxxxxx
> > Subject: Re: Packet Loss Issue with Netem on Linux Bridge
> > 
> > In theory you can do this on inbound queues by using ifb, but I haven't tested
> > it myself:
> > http://linux-net.osdl.org/index.php/Netem#How_can_I_use_netem_on_incoming_traffic.3F
> > 
> > On 2/20/07, Calum Lind <calum.lind@xxxxxxxxxxxxxxxxxxxx> wrote:
> > >
> > >
> > >
> > >
> > > This is a basic concept of Netem: by default it will only apply its rules
> > to packets on the egress of an interface, therefore to apply loss to
> > packets travelling from the eth0 side through to the eth1 side you need to
> > apply it to eth1 (and vice versa). It is documented, but not very clearly.
> > 
> 
> If you read the FAQ in the documentation at http://linux-net.osdl.org/index.php/Netem
> you will see how to use IFB to apply netem on input.
> _______________________________________________
> Netem mailing list
> Netem@xxxxxxxxxxxxxx
> https://lists.osdl.org/mailman/listinfo/netem

---
Adaptive Communications Networks Research Group
Electronic Engineering Dept.
Aston University
Birmingham
B7 4ET

t: +44 (0)7732 069 667
e: taankr@xxxxxxxxxxx
w: www-users.aston.ac.uk/~taankr
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.linux-foundation.org/pipermail/netem/attachments/20070221/fd72fa52/attachment-0001.htm
From shemminger at linux-foundation.org  Wed Feb 21 10:09:56 2007
From: shemminger at linux-foundation.org (Stephen Hemminger)
Date: Wed Apr 18 17:37:50 2007
Subject: Packet Loss Issue with Netem on Linux Bridge
In-Reply-To: <1172057356.3455.1.camel@localhost.localdomain>
References: <5640c7e00702191304m68041b65q1aaeef0ea107e060@xxxxxxxxxxxxxx>
	<000901c754ec$364cb400$e301a8c0@xxxxxxxxxxxxxxxxxxx>
	<RET-j1JImg6e6c9ba6e1d3fd51f443c10aede8abf3-20070220105703.52a26e72@localhost>
	<1172057356.3455.1.camel@localhost.localdomain>
Message-ID: <20070221100956.5ab1031d@freekitty>

On Wed, 21 Feb 2007 11:29:16 +0000
Ritesh Taank <taankr@xxxxxxxxxxx> wrote:

> The question is: is there a difference between using an ifb on the
> incoming interface (eth0), as opposed to applying the qdisc to the outgoing
> interface (eth1)? It is my particular setup that I am referring
> to.
> 
> Ritesh

There probably is a performance difference. Doing it on the input side
means potentially two queue traversals. Normally, input packets aren't
queued.
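For the bridge in this thread, the egress-side setup avoids that extra hop; a
minimal sketch, assuming the roles Ritesh described (server behind eth0,
client behind eth1) and purely illustrative loss figures:

   # forward channel (server -> client) leaves the bridge on eth1,
   # so impair it on eth1's egress
   tc qdisc add dev eth1 root netem loss 40%

   # reverse channel (ACKs, client -> server) leaves on eth0
   tc qdisc add dev eth0 root netem loss 1%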


-- 
Stephen Hemminger <shemminger@xxxxxxxxxxxxxxxxxxxx>

