delay and shaper on interface in both egress and ingress directions


 



Hello,
I have aggregated the eth1 and eth2 interfaces under my Debian machine
into bond0 which speaks LACP. I would like to introduce 20ms latency
and a 2Mbps shaper on eth1 in ingress and egress directions. For the eth2
interface I would like to introduce 60ms latency and a 2Mbps shaper on
both ingress and egress directions.

I did the following:

Attached queue discipline HTB to eth1 and eth2 in the egress
direction ("root") and named it "1:0":

tc qdisc add dev eth1 root handle 1:0 htb
tc qdisc add dev eth2 root handle 1:0 htb

Set a root-level class named "1:1" on eth1 and eth2 with a rate of
2048kbit (2Mbps) for egress traffic:

tc class add dev eth1 parent 1:0 classid 1:1 htb rate 2048kbit
tc class add dev eth2 parent 1:0 classid 1:1 htb rate 2048kbit

Finally, in order to match all IP traffic to the class named "1:1", I made a
filter named "prio 1". As I understand it, "flowid 1:1" means that all
the matched traffic will be forwarded to class "1:1".

tc filter add dev eth1 protocol ip parent 1:0 prio 1 u32 match ip src
0.0.0.0/0 flowid 1:1
tc filter add dev eth2 protocol ip parent 1:0 prio 1 u32 match ip src
0.0.0.0/0 flowid 1:1


As I have chosen HTB as the queuing method, how can I add latency with
netem? If I use the following commands:

tc qdisc add dev eth1 root netem delay 20ms
tc qdisc add dev eth2 root netem delay 60ms

...I get "RTNETLINK answers: File exists".


In addition, to add the shaper and latency in the ingress direction as well, I
did the following:

Loaded the "ifb" module into the kernel with:

modprobe ifb

...and brought up the dummy interface with:

ifconfig ifb0 up

...and set up ingress queues:

tc qdisc add dev eth1 ingress
tc qdisc add dev eth2 ingress

Am I correct that as a next step I should create a filter which
redirects all my traffic from eth1 and eth2 to the ifb0 interface, and then
apply the delay and shaper to the ifb0 interface in order to achieve ingress
shaping and delay on eth1 and eth2? If yes, then how can I keep the
different delay for eth1 and eth2?


regards,
martin

From shemminger at linux-foundation.org  Sun Mar  6 09:49:04 2011
From: shemminger at linux-foundation.org (Stephen Hemminger)
Date: Sun, 6 Mar 2011 09:49:04 -0800
Subject: delay and shaper on interface in both egress and
 ingress directions
In-Reply-To: <AANLkTinod7bAnKVw6XYhZhrG=KT_qmFWNB0Et8xaF33k@xxxxxxxxxxxxxx>
References: <AANLkTinod7bAnKVw6XYhZhrG=KT_qmFWNB0Et8xaF33k@xxxxxxxxxxxxxx>
Message-ID: <20110306094904.0d98d3d5@nehalam>

On Sun, 6 Mar 2011 17:47:55 +0200
Martin T <m4rtntns at gmail.com> wrote:

> Hello,
> I have aggregated the eth1 and eth2 interfaces under my Debian machine
> into bond0 which speaks LACP. I would like to introduce 20ms latency
> and a 2Mbps shaper on eth1 in ingress and egress directions. For the eth2
> interface I would like to introduce 60ms latency and a 2Mbps shaper on
> both ingress and egress directions.
> 
> I did the following:
> 
> Attached queue discipline HTB to eth1 and eth2 in the egress
> direction ("root") and named it "1:0":
> 
> tc qdisc add dev eth1 root handle 1:0 htb
> tc qdisc add dev eth2 root handle 1:0 htb
> 
> Set a root-level class named "1:1" on eth1 and eth2 with a rate of
> 2048kbit (2Mbps) for egress traffic:
> 
> tc class add dev eth1 parent 1:0 classid 1:1 htb rate 2048kbit
> tc class add dev eth2 parent 1:0 classid 1:1 htb rate 2048kbit
> 
> Finally, in order to match all IP traffic to the class named "1:1", I made a
> filter named "prio 1". As I understand it, "flowid 1:1" means that all
> the matched traffic will be forwarded to class "1:1".
> 
> tc filter add dev eth1 protocol ip parent 1:0 prio 1 u32 match ip src
> 0.0.0.0/0 flowid 1:1
> tc filter add dev eth2 protocol ip parent 1:0 prio 1 u32 match ip src
> 0.0.0.0/0 flowid 1:1
> 
> 
> As I have chosen HTB as the queuing method, how can I add latency with
> netem? If I use the following commands:
> 
> tc qdisc add dev eth1 root netem delay 20ms
> tc qdisc add dev eth2 root netem delay 60ms
> 
> ...I get "RTNETLINK answers: File exists".
> 
> 
> In addition, to add the shaper and latency in the ingress direction as well, I
> did the following:
> 
> Loaded the "ifb" module into the kernel with:
> 
> modprobe ifb
> 
> ...and brought up the dummy interface with:
> 
> ifconfig ifb0 up
> 
> ...and set up ingress queues:
> 
> tc qdisc add dev eth1 ingress
> tc qdisc add dev eth2 ingress
> 
> Am I correct that as a next step I should create a filter which
> redirects all my traffic from eth1 and eth2 to the ifb0 interface, and then
> apply the delay and shaper to the ifb0 interface in order to achieve ingress
> shaping and delay on eth1 and eth2? If yes, then how can I keep the
> different delay for eth1 and eth2?

Shaping won't work on ingress; ingress does not allow queueing.
With ifb, the output queue is the input queue to the system.

When mixing qdiscs you have to set up a parent/child relationship.
Read the LARTC documentation.
  http://lartc.org/
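
A minimal sketch of such a parent/child setup, reusing the 1:1 HTB class
from the original commands (the netem handles 10: and 20: below are
arbitrary choices, not something prescribed in the thread):

tc qdisc add dev eth1 parent 1:1 handle 10: netem delay 20ms
tc qdisc add dev eth2 parent 1:1 handle 20: netem delay 60ms

For the ingress side, one possible approach is to give each physical
interface its own ifb device and repeat the HTB+netem hierarchy there,
so the two delays stay separate (ifb1 and numifbs=2 are assumptions,
not part of the original setup):

modprobe ifb numifbs=2
ip link set ifb0 up
ip link set ifb1 up
tc filter add dev eth1 parent ffff: protocol ip u32 match u32 0 0 \
    action mirred egress redirect dev ifb0
tc filter add dev eth2 parent ffff: protocol ip u32 match u32 0 0 \
    action mirred egress redirect dev ifb1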



-- 

From audrius at idi.ntnu.no  Thu Mar 10 07:46:55 2011
From: audrius at idi.ntnu.no (Audrius Jurgelionis)
Date: Thu, 10 Mar 2011 16:46:55 +0100
Subject: jitter emulation issue
Message-ID: <251Rab9c5cd2ebfbe2952dda2f5062ecf4d8@xxxxxxxxxxx>

Hi,

we have run a number of measurements to verify Netem's jitter
emulation.
The results appear to be different from the ones we expected.

For example, generating 32 ms jitter on top of a 256 ms base delay
without correlation (tc qdisc add dev eth0 root netem delay 256ms 32ms)
for the normal, pareto and paretonormal distributions results in a mean
value of 296 ms (for all distributions). The mean value is supposed to be
equal or close to the base delay of 256 ms, which is the value that was
given as an input to the tc command. It is not clear from the source
code which component is causing this 40 ms overhead. Do you have any
suggestions?
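
(For reference, the explicit form of the command with a distribution
table - assuming that is how the variants above were selected - would
be something like:

tc qdisc add dev eth0 root netem delay 256ms 32ms distribution pareto

with the table read from the distribution files shipped with iproute2.)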


The attached figure shows the measurement results for different jitter
levels from 0 to 512 with 0 correlation and 0 ms base delay, using a
normal distribution.
According to Netem's description, the induced delay, "tc
qdisc ... delay [mu]ms [sigma]ms", should be mu +- sigma.
However, from the attached figure it can be seen that in all cases
the realized standard deviation is much lower than the input sigma. The
actual realized jitter is even lower.

Thanks,
Audrius
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 221212_stdev_normal.jpg
Type: image/jpeg
Size: 52786 bytes
Desc: not available
Url : http://lists.linux-foundation.org/pipermail/netem/attachments/20110310/7e103f31/attachment-0001.jpg

From jpproducer at gmx.de  Mon Mar 14 08:16:25 2011
From: jpproducer at gmx.de (Joni Pirtsch)
Date: Mon, 14 Mar 2011 16:16:25 +0100
Subject: Increase Timer Resolution
Message-ID: <20110314151625.116370@xxxxxxx>

Hello everyone,
I'd like to increase the timing resolution for delay and jitter values. As far as I understand, netem uses the frequency of 1000 HZ interrupts (a typical value in newer Linux kernels) to derive its timing for delay and jitter. I patched the 2.6.35 kernel to run with 10 000 HZ interrupts. Now, I would like to patch the sch_netem.c file to use netem delay and jitter with 1/10 ms steps - is this possible? Are there other modules that need to be patched?

I know, all this is very experimental. But I would just like to see if it's possible to use netem with more precise timing parameters. I had a look at the C source of sch_netem.c, but I couldn't find anything that seemed to match, and my skills in C are very basic...

Regards, JP
-- 

From shemminger at linux-foundation.org  Mon Mar 14 09:09:24 2011
From: shemminger at linux-foundation.org (Stephen Hemminger)
Date: Mon, 14 Mar 2011 09:09:24 -0700
Subject: Increase Timer Resolution
In-Reply-To: <20110314151625.116370@xxxxxxx>
References: <20110314151625.116370@xxxxxxx>
Message-ID: <20110314090924.105c6f88@nehalam>

On Mon, 14 Mar 2011 16:16:25 +0100
"Joni Pirtsch" <jpproducer at gmx.de> wrote:

> Hello everyone,
> I'd like to increase the timing resolution for delay and jitter values. As far as I understand, netem uses the frequency of 1000 HZ interrupts (a typical value in newer Linux kernels) to derive its timing for delay and jitter. I patched the 2.6.35 kernel to run with 10 000 HZ interrupts. Now, I would like to patch the sch_netem.c file to use netem delay and jitter with 1/10 ms steps - is this possible? Are there other modules that need to be patched?
> 
> I know, all this is very experimental. But I would just like to see if it's possible to use netem with more precise timing parameters. I had a look at the C source of sch_netem.c, but I couldn't find anything that seemed to match, and my skills in C are very basic...
> 
> Regards, JP

netem has used high-res timers for several years and is independent of the HZ value.

-- 

From jpproducer at gmx.de  Mon Mar 14 09:34:09 2011
From: jpproducer at gmx.de (Joni Pirtsch)
Date: Mon, 14 Mar 2011 17:34:09 +0100
Subject: Increase Timer Resolution
In-Reply-To: <20110314090924.105c6f88@nehalam>
References: <20110314151625.116370@xxxxxxx> <20110314090924.105c6f88@nehalam>
Message-ID: <20110314163409.77490@xxxxxxx>

Thanks Stephen for your immediate answer! How accurate are these hi-res timers? What can I do to use smaller time steps? Or are there other possibilities to achieve more precise delay and jitter?

Regards, JP









-------- Original Message --------
> Date: Mon, 14 Mar 2011 09:09:24 -0700
> From: Stephen Hemminger <shemminger at linux-foundation.org>
> To: "Joni Pirtsch" <jpproducer at gmx.de>
> CC: netem at lists.linux-foundation.org
> Subject: Re: Increase Timer Resolution

> On Mon, 14 Mar 2011 16:16:25 +0100
> "Joni Pirtsch" <jpproducer at gmx.de> wrote:
> 
> > Hello everyone,
> > I'd like to increase the timing resolution for delay and jitter values.
> As far as I understand, netem uses the frequency of 1000 HZ interrupts
> (a typical value in newer Linux kernels) to derive its timing for delay and
> jitter. I patched the 2.6.35 kernel to run with 10 000 HZ interrupts. Now, I
> would like to patch the sch_netem.c file to use netem delay and jitter
> with 1/10 ms steps - is this possible? Are there other modules that need to be
> patched?
> > 
> > I know, all this is very experimental. But I would just like to see if
> it's possible to use netem with more precise timing parameters. I had a look
> at the C source of sch_netem.c, but I couldn't find anything that seemed to
> match, and my skills in C are very basic...
> > 
> > Regards, JP
> 
> netem has used high-res timers for several years and is independent of the HZ
> value.
> 
> -- 

-- 

From shemminger at linux-foundation.org  Mon Mar 14 09:45:33 2011
From: shemminger at linux-foundation.org (Stephen Hemminger)
Date: Mon, 14 Mar 2011 09:45:33 -0700
Subject: Increase Timer Resolution
In-Reply-To: <20110314163409.77490@xxxxxxx>
References: <20110314151625.116370@xxxxxxx> <20110314090924.105c6f88@nehalam>
	<20110314163409.77490@xxxxxxx>
Message-ID: <20110314094533.538dc76c@nehalam>

On Mon, 14 Mar 2011 17:34:09 +0100
"Joni Pirtsch" <jpproducer at gmx.de> wrote:

> Thanks Stephen for your immediate answer! How accurate are these hi-res timers? What can I do to use smaller time steps? Or are there other possibilities to achieve more precise delay and jitter?
> 
> Regards, JP
> 

They allow nanosecond resolution. But if you want accurate wakeup you
may need to use the -RT kernel patches to allow faster pre-emption.
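
For what it's worth, tc already accepts sub-millisecond time units on
the command line (assuming a reasonably recent iproute2), so
fractional-millisecond values can be requested without touching HZ;
whether the wakeups are honoured that precisely is a separate question,
as noted above:

tc qdisc add dev eth0 root netem delay 500us
tc qdisc change dev eth0 root netem delay 500us 100us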

-- 

From lucas.nussbaum at loria.fr  Mon Mar 14 09:56:39 2011
From: lucas.nussbaum at loria.fr (Lucas Nussbaum)
Date: Mon, 14 Mar 2011 17:56:39 +0100
Subject: Increase Timer Resolution
In-Reply-To: <20110314163409.77490@xxxxxxx>
References: <20110314151625.116370@xxxxxxx> <20110314090924.105c6f88@nehalam>
	<20110314163409.77490@xxxxxxx>
Message-ID: <20110314165639.GA18844@xxxxxxxxxxxxxxxx>

On 14/03/11 at 17:34 +0100, Joni Pirtsch wrote:
> Thanks Stephen for your immediate answer! How accurate are these hi-res timers? What can I do to use smaller time steps? Or are there other possibilities to achieve more precise delay and jitter?

Do you have proof that it doesn't work at the moment? It should "just
work".
-- 
| Lucas Nussbaum            MCF Université Nancy 2 |
| lucas.nussbaum at loria.fr       LORIA / AlGorille |
| http://www.loria.fr/~lnussbau/  +33 3 54 95 86 19 |

From jpproducer at gmx.de  Tue Mar 15 03:00:11 2011
From: jpproducer at gmx.de (Joni Pirtsch)
Date: Tue, 15 Mar 2011 11:00:11 +0100
Subject: Increase Timer Resolution
In-Reply-To: <20110314165639.GA18844@xxxxxxxxxxxxxxxx>
References: <20110314151625.116370@xxxxxxx> <20110314090924.105c6f88@nehalam>
	<20110314163409.77490@xxxxxxx>
	<20110314165639.GA18844@xxxxxxxxxxxxxxxx>
Message-ID: <20110315100011.137870@xxxxxxx>

-------- Original Message --------
> Date: Mon, 14 Mar 2011 17:56:39 +0100
> From: Lucas Nussbaum <lucas.nussbaum at loria.fr>
> To: Joni Pirtsch <jpproducer at gmx.de>
> CC: netem at lists.linux-foundation.org
> Subject: Re: Increase Timer Resolution

> On 14/03/11 at 17:34 +0100, Joni Pirtsch wrote:
> > Thanks Stephen for your immediate answer! How accurate are these hi-res
> timers? What can I do to use smaller time steps? Or are there other
> possibilities to achieve more precise delay and jitter?
> 
> Do you have proof that it doesn't work at the moment? It should "just
> work".
> -- 
> | Lucas Nussbaum            MCF Université Nancy 2 |
> | lucas.nussbaum at loria.fr       LORIA / AlGorille |
> | http://www.loria.fr/~lnussbau/  +33 3 54 95 86 19 |

Everything works fine. I would like to improve timing precision, as I am using netem for high bitrates (over 1 Gbit/s). In this case, millisecond precision is relatively inaccurate compared to packet inter-arrival times in the range of micro- or nanoseconds...

So my thought was that somebody else maybe has the same problem, maybe found a working solution, maybe patched the current netem version, etc...


Regards, JP


-- 

From m2r0007 at gmail.com  Tue Mar 15 05:31:13 2011
From: m2r0007 at gmail.com (Mayur Gowda)
Date: Tue, 15 Mar 2011 12:31:13 +0000
Subject: Increase Timer Resolution
In-Reply-To: <20110315100011.137870@xxxxxxx>
References: <20110314151625.116370@xxxxxxx> <20110314090924.105c6f88@nehalam>
	<20110314163409.77490@xxxxxxx>
	<20110314165639.GA18844@xxxxxxxxxxxxxxxx>
	<20110315100011.137870@xxxxxxx>
Message-ID: <AANLkTikDCPocouO0-CY8u8zb0t1trVY1_6=o=zpvzrB-@xxxxxxxxxxxxxx>

Hi,
       I had similar problems previously when the delay was reduced to micro-
seconds for 1 Gbps. Though netem worked without any problem, the variance and
packet loss were too high. I recall that software emulation (including
netem) for speeds > 1 Gbps and delay < 500 us has high variance due to the clock
reference. Is this true?


Regards
Mayur

On Tue, Mar 15, 2011 at 10:00 AM, Joni Pirtsch <jpproducer at gmx.de> wrote:

> -------- Original Message --------
> > Date: Mon, 14 Mar 2011 17:56:39 +0100
> > From: Lucas Nussbaum <lucas.nussbaum at loria.fr>
> > To: Joni Pirtsch <jpproducer at gmx.de>
> > CC: netem at lists.linux-foundation.org
> > Subject: Re: Increase Timer Resolution
>
> > On 14/03/11 at 17:34 +0100, Joni Pirtsch wrote:
> > > Thanks Stephen for your immediate answer! How accurate are these hi-res
> > timers? What can I do to use smaller time steps? Or are there other
> > possibilities to achieve more precise delay and jitter?
> >
> > Do you have proof that it doesn't work at the moment? It should "just
> > work".
> > --
> > | Lucas Nussbaum            MCF Université Nancy 2 |
> > | lucas.nussbaum at loria.fr       LORIA / AlGorille |
> > | http://www.loria.fr/~lnussbau/  +33 3 54 95 86 19 |
>
> Everything works fine. I would like to improve timing precision, as I am
> using netem for high bitrates (over 1 Gbit/s). In this case,
> millisecond precision is relatively inaccurate compared to
> packet inter-arrival times in the range of micro- or nanoseconds...
>
> So my thought was that somebody else maybe has the same problem, maybe
> found a working solution, maybe patched the current netem version, etc...
>
>
> Regards, JP
>
> _______________________________________________
> Netem mailing list
> Netem at lists.linux-foundation.org
> https://lists.linux-foundation.org/mailman/listinfo/netem
>
>

From beier at informatik.hu-berlin.de  Tue Mar 29 14:20:16 2011
From: beier at informatik.hu-berlin.de (Christian Beier)
Date: Tue, 29 Mar 2011 23:20:16 +0200
Subject: Netem delay affects UDP throughput?
Message-ID: <20110329232016.63cf21e7@joe.kifferkolonie>


Hi there,
I'm using 'tc qdisc add dev eth0 root netem delay 250ms' on an Ubuntu
10.10 server box to emulate a WAN to test out a custom protocol. The
client machine is connected via Fast Ethernet with a switch in between.
Testing UDP throughput with iperf reveals that netem delay affects UDP
throughput, which to my understanding shouldn't happen. Is this a known
issue? This was raised before
(https://lists.linux-foundation.org/pipermail/netem/2006-May/000917.html),
but with no decisive outcome...


Server: iperf -u -c client -i1 -w4M -b90M
Client: iperf -s -u -U -i 1 -w 4M

Client output with 'tc qdisc add dev eth0 root netem delay 250ms'
applied on the server side:

[ 15]  0.0- 1.0 sec  5.61 MBytes  47.0 Mbits/sec  0.016 ms 2897/ 6896
(42%)
[ 15]  1.0- 2.0 sec  5.61 MBytes  47.0 Mbits/sec  0.069 ms 3757/ 7757
(48%)
[ 15]  2.0- 3.0 sec  5.61 MBytes  47.0 Mbits/sec  0.012 ms 3881/ 7881
(49%)
[ 15]  3.0- 4.0 sec  5.61 MBytes  47.0 Mbits/sec  0.007 ms 3696/ 7696
(48%)
[ 15]  4.0- 5.0 sec  5.61 MBytes  47.0 Mbits/sec  0.012 ms 3696/ 7696
(48%)
[ 15]  5.0- 6.0 sec  5.61 MBytes  47.0 Mbits/sec  0.015 ms 3738/ 7738
(48%)
[ 15]  6.0- 7.0 sec  5.61 MBytes  47.0 Mbits/sec  0.003 ms 3696/ 7696
(48%)
[ 15]  7.0- 8.0 sec  5.61 MBytes  47.0 Mbits/sec  0.006 ms 3699/ 7699
(48%)
[ 15]  8.0- 9.0 sec  5.61 MBytes  47.0 Mbits/sec  0.012 ms 3725/ 7725
(48%)
[ 15]  9.0-10.0 sec  5.61 MBytes  47.0 Mbits/sec  0.008 ms 3697/ 7697
(48%)
[ 15]  0.0-10.3 sec  56.1 MBytes  45.9 Mbits/sec  15.656 ms 36884/76884
(48%)





Client output _without_ netem delay:

[ ID] Interval       Transfer     Bandwidth       Jitter   Lost/Total
Datagrams
[  1]  0.0- 1.0 sec  10.8 MBytes  90.5 Mbits/sec  0.012 ms    0/ 7693
(0%)
[  1]  1.0- 2.0 sec  10.8 MBytes  90.4 Mbits/sec  0.008 ms    0/ 7691
(0%)
[  1]  2.0- 3.0 sec  10.8 MBytes  90.5 Mbits/sec  0.009 ms    0/ 7692
(0%)
[  1]  3.0- 4.0 sec  10.8 MBytes  90.4 Mbits/sec  0.017 ms    0/ 7691
(0%)
[  1]  4.0- 5.0 sec  10.8 MBytes  90.5 Mbits/sec  0.012 ms    0/ 7692
(0%)
[  1]  5.0- 6.0 sec  10.8 MBytes  90.4 Mbits/sec  0.009 ms    0/ 7689
(0%)
[  1]  6.0- 7.0 sec  10.8 MBytes  90.5 Mbits/sec  0.023 ms    0/ 7693
(0%)
[  1]  7.0- 8.0 sec  10.8 MBytes  90.4 Mbits/sec  0.010 ms    0/ 7690
(0%)
[  1]  8.0- 9.0 sec  10.8 MBytes  90.5 Mbits/sec  0.012 ms    0/ 7692
(0%)
[  1]  9.0-10.0 sec  10.8 MBytes  90.4 Mbits/sec  0.073 ms    0/ 7688
(0%)
[  1]  0.0-10.0 sec    108 MBytes  90.4 Mbits/sec  0.044 ms    0/76920
(0%)
[  1]  0.0-10.0 sec  1 datagrams received out-of-order


Cheers,
   Christian

-- 
what is, is;
what is not is possible.

From jpproducer at gmx.de  Thu Mar 31 00:39:33 2011
From: jpproducer at gmx.de (Joni Pirtsch)
Date: Thu, 31 Mar 2011 09:39:33 +0200
Subject: Netem delay affects UDP throughput?
In-Reply-To: <20110329232016.63cf21e7@joe.kifferkolonie>
References: <20110329232016.63cf21e7@joe.kifferkolonie>
Message-ID: <20110331073933.43880@xxxxxxx>


-------- Original Message --------
> Date: Tue, 29 Mar 2011 23:20:16 +0200
> From: Christian Beier <beier at informatik.hu-berlin.de>
> To: netem at lists.linux-foundation.org
> Subject: Netem delay affects UDP throughput?

> 
> Hi there,
> I'm using 'tc qdisc add dev eth0 root netem delay 250ms' on an Ubuntu
> 10.10 server box to emulate a WAN to test out a custom protocol. The
> client machine is connected via Fast Ethernet with a switch in between.
> Testing UDP throughput with iperf reveals that netem delay affects UDP
> throughput, which to my understanding shouldn't happen. Is this a known
> issue? This was raised before
> (https://lists.linux-foundation.org/pipermail/netem/2006-May/000917.html),
> but with no decisive outcome...
> 
> 
> Server: iperf -u -c client -i1 -w4M -b90M
> Client: iperf -s -u -U -i 1 -w 4M
> 
> Client output with 'tc qdisc add dev eth0 root netem delay 250ms'
> applied on the server side:
> 
> [ 15]  0.0- 1.0 sec  5.61 MBytes  47.0 Mbits/sec  0.016 ms 2897/ 6896
> (42%)
> [ 15]  1.0- 2.0 sec  5.61 MBytes  47.0 Mbits/sec  0.069 ms 3757/ 7757
> (48%)
> [ 15]  2.0- 3.0 sec  5.61 MBytes  47.0 Mbits/sec  0.012 ms 3881/ 7881
> (49%)
> [ 15]  3.0- 4.0 sec  5.61 MBytes  47.0 Mbits/sec  0.007 ms 3696/ 7696
> (48%)
> [ 15]  4.0- 5.0 sec  5.61 MBytes  47.0 Mbits/sec  0.012 ms 3696/ 7696
> (48%)
> [ 15]  5.0- 6.0 sec  5.61 MBytes  47.0 Mbits/sec  0.015 ms 3738/ 7738
> (48%)
> [ 15]  6.0- 7.0 sec  5.61 MBytes  47.0 Mbits/sec  0.003 ms 3696/ 7696
> (48%)
> [ 15]  7.0- 8.0 sec  5.61 MBytes  47.0 Mbits/sec  0.006 ms 3699/ 7699
> (48%)
> [ 15]  8.0- 9.0 sec  5.61 MBytes  47.0 Mbits/sec  0.012 ms 3725/ 7725
> (48%)
> [ 15]  9.0-10.0 sec  5.61 MBytes  47.0 Mbits/sec  0.008 ms 3697/ 7697
> (48%)
> [ 15]  0.0-10.3 sec  56.1 MBytes  45.9 Mbits/sec  15.656 ms 36884/76884
> (48%)
> 
> 
> 
> 
> 
> Client output _without_ netem delay:
> 
> [ ID] Interval       Transfer     Bandwidth       Jitter   Lost/Total
> Datagrams
> [  1]  0.0- 1.0 sec  10.8 MBytes  90.5 Mbits/sec  0.012 ms    0/ 7693
> (0%)
> [  1]  1.0- 2.0 sec  10.8 MBytes  90.4 Mbits/sec  0.008 ms    0/ 7691
> (0%)
> [  1]  2.0- 3.0 sec  10.8 MBytes  90.5 Mbits/sec  0.009 ms    0/ 7692
> (0%)
> [  1]  3.0- 4.0 sec  10.8 MBytes  90.4 Mbits/sec  0.017 ms    0/ 7691
> (0%)
> [  1]  4.0- 5.0 sec  10.8 MBytes  90.5 Mbits/sec  0.012 ms    0/ 7692
> (0%)
> [  1]  5.0- 6.0 sec  10.8 MBytes  90.4 Mbits/sec  0.009 ms    0/ 7689
> (0%)
> [  1]  6.0- 7.0 sec  10.8 MBytes  90.5 Mbits/sec  0.023 ms    0/ 7693
> (0%)
> [  1]  7.0- 8.0 sec  10.8 MBytes  90.4 Mbits/sec  0.010 ms    0/ 7690
> (0%)
> [  1]  8.0- 9.0 sec  10.8 MBytes  90.5 Mbits/sec  0.012 ms    0/ 7692
> (0%)
> [  1]  9.0-10.0 sec  10.8 MBytes  90.4 Mbits/sec  0.073 ms    0/ 7688
> (0%)
> [  1]  0.0-10.0 sec    108 MBytes  90.4 Mbits/sec  0.044 ms    0/76920
> (0%)
> [  1]  0.0-10.0 sec  1 datagrams received out-of-order
> 
> 
> Cheers,
>    Christian
>


Hey Christian,
your iperf test "applied on server side" shows high packet loss (42%, 48%... for the first 2 measurements). I think this has an impact on bandwidth. Maybe you should try the same thing with less delay, and keep track of lost packets. Maybe your machine can't process 250 ms delay on 90 Mbit traffic!

Regards, JP

-- 

From beier at informatik.hu-berlin.de  Thu Mar 31 02:57:26 2011
From: beier at informatik.hu-berlin.de (Christian Beier)
Date: Thu, 31 Mar 2011 11:57:26 +0200
Subject: Netem delay affects UDP throughput?
In-Reply-To: <20110331073933.43880@xxxxxxx>
References: <20110329232016.63cf21e7@joe.kifferkolonie>
	<20110331073933.43880@xxxxxxx>
Message-ID: <20110331115726.364bc523@bunker2.kifferkolonie>

On Thu, 31 Mar 2011 09:39:33 +0200
"Joni Pirtsch" <jpproducer at gmx.de> wrote:

> 
> -------- Original Message --------
> > Date: Tue, 29 Mar 2011 23:20:16 +0200
> > From: Christian Beier <beier at informatik.hu-berlin.de>
> > To: netem at lists.linux-foundation.org
> > Subject: Netem delay affects UDP throughput?
> 
> > 
> > Hi there,
> > I'm using 'tc qdisc add dev eth0 root netem delay 250ms' on an Ubuntu
> > 10.10 server box to emulate a WAN to test out a custom protocol. The
> > client machine is connected via Fast Ethernet with a switch in between.
> > Testing UDP throughput with iperf reveals that netem delay affects UDP
> > throughput, which to my understanding shouldn't happen. Is this a known
> > issue? This was raised before
> > (https://lists.linux-foundation.org/pipermail/netem/2006-May/000917.html),
> > but with no decisive outcome...
> > 
> > 
> > Server: iperf -u -c client -i1 -w4M -b90M
> > Client: iperf -s -u -U -i 1 -w 4M
> > 
> > Client output with 'tc qdisc add dev eth0 root netem delay 250ms'
> > applied on the server side:
> > 
> > [ 15]  0.0- 1.0 sec  5.61 MBytes  47.0 Mbits/sec  0.016 ms 2897/ 6896
> > (42%)
> > [ 15]  1.0- 2.0 sec  5.61 MBytes  47.0 Mbits/sec  0.069 ms 3757/ 7757
> > (48%)
> > [ 15]  2.0- 3.0 sec  5.61 MBytes  47.0 Mbits/sec  0.012 ms 3881/ 7881
> > (49%)
> > [ 15]  3.0- 4.0 sec  5.61 MBytes  47.0 Mbits/sec  0.007 ms 3696/ 7696
> > (48%)
> > [ 15]  4.0- 5.0 sec  5.61 MBytes  47.0 Mbits/sec  0.012 ms 3696/ 7696
> > (48%)
> > [ 15]  5.0- 6.0 sec  5.61 MBytes  47.0 Mbits/sec  0.015 ms 3738/ 7738
> > (48%)
> > [ 15]  6.0- 7.0 sec  5.61 MBytes  47.0 Mbits/sec  0.003 ms 3696/ 7696
> > (48%)
> > [ 15]  7.0- 8.0 sec  5.61 MBytes  47.0 Mbits/sec  0.006 ms 3699/ 7699
> > (48%)
> > [ 15]  8.0- 9.0 sec  5.61 MBytes  47.0 Mbits/sec  0.012 ms 3725/ 7725
> > (48%)
> > [ 15]  9.0-10.0 sec  5.61 MBytes  47.0 Mbits/sec  0.008 ms 3697/ 7697
> > (48%)
> > [ 15]  0.0-10.3 sec  56.1 MBytes  45.9 Mbits/sec  15.656 ms 36884/76884
> > (48%)
> > 
> > 
> > 
> > 
> > 
> > Client output _without_ netem delay:
> > 
> > [ ID] Interval       Transfer     Bandwidth       Jitter   Lost/Total
> > Datagrams
> > [  1]  0.0- 1.0 sec  10.8 MBytes  90.5 Mbits/sec  0.012 ms    0/ 7693
> > (0%)
> > [  1]  1.0- 2.0 sec  10.8 MBytes  90.4 Mbits/sec  0.008 ms    0/ 7691
> > (0%)
> > [  1]  2.0- 3.0 sec  10.8 MBytes  90.5 Mbits/sec  0.009 ms    0/ 7692
> > (0%)
> > [  1]  3.0- 4.0 sec  10.8 MBytes  90.4 Mbits/sec  0.017 ms    0/ 7691
> > (0%)
> > [  1]  4.0- 5.0 sec  10.8 MBytes  90.5 Mbits/sec  0.012 ms    0/ 7692
> > (0%)
> > [  1]  5.0- 6.0 sec  10.8 MBytes  90.4 Mbits/sec  0.009 ms    0/ 7689
> > (0%)
> > [  1]  6.0- 7.0 sec  10.8 MBytes  90.5 Mbits/sec  0.023 ms    0/ 7693
> > (0%)
> > [  1]  7.0- 8.0 sec  10.8 MBytes  90.4 Mbits/sec  0.010 ms    0/ 7690
> > (0%)
> > [  1]  8.0- 9.0 sec  10.8 MBytes  90.5 Mbits/sec  0.012 ms    0/ 7692
> > (0%)
> > [  1]  9.0-10.0 sec  10.8 MBytes  90.4 Mbits/sec  0.073 ms    0/ 7688
> > (0%)
> > [  1]  0.0-10.0 sec    108 MBytes  90.4 Mbits/sec  0.044 ms    0/76920
> > (0%)
> > [  1]  0.0-10.0 sec  1 datagrams received out-of-order
> > 
> > 
> > Cheers,
> >    Christian
> >
> 
> 
> Hey Christian,
> your iperf test "applied on server side" shows high packet loss (42%, 48%... for the first 2 measurements). I think this has an impact on bandwidth. Maybe you should try the same thing with less delay, and keep track of lost packets. Maybe your machine can't process 250 ms delay on 90 Mbit traffic!

Hm, I thought of this as well, as the sending machine shows 100% CPU
usage and is a rather old laptop (an X61 Thinkpad in fact - not so old).
But then again, changing the CPU frequency on this box didn't change
anything. I tried with both 1.6 GHz and 800 MHz, but the sending
machine stayed at its 47 Mbit/s. Also, increasing the send buffer in
another test to a really huge value of 80 MB after fiddling with
net.core.wmem_max didn't change the numbers. So my rough guess is that
this indeed is CPU-bound, but to reach this throughput with this delay,
one would need a CPU an order of magnitude faster?

Cheers,
   Christian



-- 
what is, is;
what is not is possible.

From shemminger at linux-foundation.org  Thu Mar 31 08:43:34 2011
From: shemminger at linux-foundation.org (Stephen Hemminger)
Date: Thu, 31 Mar 2011 08:43:34 -0700
Subject: Netem delay affects UDP throughput?
In-Reply-To: <20110329232016.63cf21e7@joe.kifferkolonie>
References: <20110329232016.63cf21e7@joe.kifferkolonie>
Message-ID: <20110331084334.1787cb1d@nehalam>

On Tue, 29 Mar 2011 23:20:16 +0200
Christian Beier <beier at informatik.hu-berlin.de> (by way of Christian Beier <beier at informatik.hu-berlin.de>) wrote:

> 
> Hi there,
> I'm using 'tc qdisc add dev eth0 root netem delay 250ms' on an Ubuntu
> 10.10 server box to emulate a WAN to test out a custom protocol. The
> client machine is connected via Fast Ethernet with a switch in between.
> Testing UDP throughput with iperf reveals that netem delay affects UDP
> throughput, which to my understanding shouldn't happen. Is this a known
> issue? This was raised before
> (https://lists.linux-foundation.org/pipermail/netem/2006-May/000917.html),
> but with no decisive outcome...
> 

You probably don't have a big enough queue to hold 250ms of full packet rate,
and packets are getting dropped.
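
As a rough back-of-the-envelope check (the numbers are mine, derived
from the iperf output above): at ~90 Mbit/s the sender pushes roughly
7700 datagrams per second, so a 250ms delay keeps about
0.25 s * 7700 pkt/s = ~1900 packets inside netem at any moment, well
above netem's default limit of 1000 packets; the excess gets dropped.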


-- 

From beier at informatik.hu-berlin.de  Thu Mar 31 08:59:59 2011
From: beier at informatik.hu-berlin.de (Christian Beier)
Date: Thu, 31 Mar 2011 17:59:59 +0200
Subject: Netem delay affects UDP throughput?
In-Reply-To: <20110331084334.1787cb1d@nehalam>
References: <20110329232016.63cf21e7@joe.kifferkolonie>
	<20110331084334.1787cb1d@nehalam>
Message-ID: <20110331175959.18568bfa@bunker2.kifferkolonie>

On Thu, 31 Mar 2011 08:43:34 -0700
Stephen Hemminger <shemminger at linux-foundation.org> wrote:

> On Tue, 29 Mar 2011 23:20:16 +0200
> Christian Beier <beier at informatik.hu-berlin.de> (by way of Christian Beier <beier at informatik.hu-berlin.de>) wrote:
> 
> > 
> > Hi there,
> > I'm using 'tc qdisc add dev eth0 root netem delay 250ms' on an Ubuntu
> > 10.10 server box to emulate a WAN to test out a custom protocol. The
> > client machine is connected via Fast Ethernet with a switch in between.
> > Testing UDP throughput with iperf reveals that netem delay affects UDP
> > throughput, which to my understanding shouldn't happen. Is this a known
> > issue? This was raised before
> > (https://lists.linux-foundation.org/pipermail/netem/2006-May/000917.html),
> > but with no decisive outcome...
> > 
> 
> You probably don't have a big enough queue to hold 250ms of full packet rate,
> and packets are getting dropped.

Well, varying the socket send buffer size didn't help; in fact it
didn't change anything, even with a send buffer as huge as 80 MB...
Or are you referring to something else I overlooked in the docs?


-- 
what is, is;
what is not is possible.

From shemminger at linux-foundation.org  Thu Mar 31 09:19:09 2011
From: shemminger at linux-foundation.org (Stephen Hemminger)
Date: Thu, 31 Mar 2011 09:19:09 -0700
Subject: Netem delay affects UDP throughput?
In-Reply-To: <20110331175959.18568bfa@bunker2.kifferkolonie>
References: <20110329232016.63cf21e7@joe.kifferkolonie>
	<20110331084334.1787cb1d@nehalam>
	<20110331175959.18568bfa@bunker2.kifferkolonie>
Message-ID: <20110331091909.25a7096a@nehalam>

On Thu, 31 Mar 2011 17:59:59 +0200
Christian Beier <beier at informatik.hu-berlin.de> wrote:

> On Thu, 31 Mar 2011 08:43:34 -0700
> Stephen Hemminger <shemminger at linux-foundation.org> wrote:
> 
> > On Tue, 29 Mar 2011 23:20:16 +0200
> > Christian Beier <beier at informatik.hu-berlin.de> (by way of Christian Beier <beier at informatik.hu-berlin.de>) wrote:
> > 
> > > 
> > > Hi there,
> > > I'm using 'tc qdisc add dev eth0 root netem delay 250ms' on an Ubuntu
> > > 10.10 server box to emulate a WAN to test out a custom protocol. The
> > > client machine is connected via Fast Ethernet with a switch in between.
> > > Testing UDP throughput with iperf reveals that netem delay affects UDP
> > > throughput, which to my understanding shouldn't happen. Is this a known
> > > issue? This was raised before
> > > (https://lists.linux-foundation.org/pipermail/netem/2006-May/000917.html),
> > > but with no decisive outcome...
> > > 
> > 
> > You probably don't have a big enough queue to hold 250ms of full packet rate,
> > and packets are getting dropped.
> 
> Well, varying the socket send buffer size didn't help; in fact it
> didn't change anything, even with a send buffer as huge as 80 MB...
> Or are you referring to something else I overlooked in the docs?

The queue is the one in netem (the limit parameter).
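
A minimal sketch of bumping that limit for the setup above (5000 is an
arbitrary value, chosen to sit comfortably above the ~1900 packets
estimated to be in flight; netem's default limit is 1000):

tc qdisc change dev eth0 root netem delay 250ms limit 5000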

-- 

From beier at informatik.hu-berlin.de  Thu Mar 31 09:22:59 2011
From: beier at informatik.hu-berlin.de (Christian Beier)
Date: Thu, 31 Mar 2011 18:22:59 +0200
Subject: Netem delay affects UDP throughput?
In-Reply-To: <20110331091909.25a7096a@nehalam>
References: <20110329232016.63cf21e7@joe.kifferkolonie>
	<20110331084334.1787cb1d@nehalam>
	<20110331175959.18568bfa@bunker2.kifferkolonie>
	<20110331091909.25a7096a@nehalam>
Message-ID: <20110331182259.0ff6f96a@bunker2.kifferkolonie>

On Thu, 31 Mar 2011 09:19:09 -0700
Stephen Hemminger <shemminger at linux-foundation.org> wrote:

> On Thu, 31 Mar 2011 17:59:59 +0200
> Christian Beier <beier at informatik.hu-berlin.de> wrote:
> 
> > On Thu, 31 Mar 2011 08:43:34 -0700
> > Stephen Hemminger <shemminger at linux-foundation.org> wrote:
> > 
> > > On Tue, 29 Mar 2011 23:20:16 +0200
> > > Christian Beier <beier at informatik.hu-berlin.de> (by way of Christian Beier <beier at informatik.hu-berlin.de>) wrote:
> > > 
> > > > 
> > > > Hi there,
> > > > I'm using 'tc qdisc add dev eth0 root netem delay 250ms' on an Ubuntu
> > > > 10.10 server box to emulate a WAN to test out a custom protocol. The
> > > > client machine is connected via Fast Ethernet with a switch in between.
> > > > Testing UDP throughput with iperf reveals that netem delay affects UDP
> > > > throughput, which to my understanding shouldn't happen. Is this a known
> > > > issue? This was raised before
> > > > (https://lists.linux-foundation.org/pipermail/netem/2006-May/000917.html),
> > > > but with no decisive outcome...
> > > > 
> > > 
> > > You probably don't have a big enough queue to hold 250ms of full packet rate,
> > > and packets are getting dropped.
> > 
> > Well, varying the socket send buffer size didn't help; in fact it
> > didn't change anything, even with a send buffer as huge as 80 MB...
> > Or are you referring to something else I overlooked in the docs?
> 
> The queue is the one in netem (the limit parameter).

D'oh. Thanks a lot! (Maybe this should be explicitly mentioned in the
docs?)

Thanks again,
   Christian


-- 
what is, is;
what is not is possible.
