multi-functionality

Hi,

My name is Harika. I am a research student. I would like to know whether it is
possible to use Delay, Break, Packet corruption and loss rate all together in
netem.


Thank you,

Harika Davuluri


From joseph.cloutier at alcatel-lucent.com  Fri Sep  3 08:40:32 2010
From: joseph.cloutier at alcatel-lucent.com (Cloutier, Joseph (Joseph))
Date: Fri, 3 Sep 2010 10:40:32 -0500
Subject: netem handling fragmented packets - how
Message-ID: <7D3B6706FA74174B8C1AC24710890745135E418023@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>


I need to use netem on a fragmented packet stream - put fragments out of
order, etc.
How can I do this?  It looks like netem is working on re-assembled packets.

Thanks, Joe


From shemminger at linux-foundation.org  Sun Sep  5 09:20:58 2010
From: shemminger at linux-foundation.org (Stephen Hemminger)
Date: Sun, 5 Sep 2010 09:20:58 -0700
Subject: multi-functionality
In-Reply-To: <4c7ff271.03a8960a.56c5.3d9d@xxxxxxxxxxxxx>
References: <4c7ff271.03a8960a.56c5.3d9d@xxxxxxxxxxxxx>
Message-ID: <20100905092058.0e2c664a@nehalam>

On Thu, 2 Sep 2010 13:51:45 -0500
"Harika Davuluri" <harika.davuluri20 at gmail.com> wrote:

> Hi,
>
> My name is Harika. I am a research student. I would like to know whether it is
> possible to use Delay, Break, Packet corruption and loss rate all together in
> netem.

Yes, those will all work fine together.
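
As a concrete illustration (an editor's sketch, not from the thread; the
interface name and the percentages are placeholders), these impairments can
all be combined in a single netem qdisc:

  # delay with jitter, plus corruption, loss and reordering, in one qdisc
  tc qdisc add dev eth0 root netem delay 100ms 10ms corrupt 0.1% loss 1% reorder 25% 50%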

From shemminger at linux-foundation.org  Sun Sep  5 09:23:09 2010
From: shemminger at linux-foundation.org (Stephen Hemminger)
Date: Sun, 5 Sep 2010 09:23:09 -0700
Subject: netem handling fragmented packets - how
In-Reply-To: <7D3B6706FA74174B8C1AC24710890745135E418023@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
References: <7D3B6706FA74174B8C1AC24710890745135E418023@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Message-ID: <20100905092309.399758c0@nehalam>

On Fri, 3 Sep 2010 10:40:32 -0500
"Cloutier, Joseph (Joseph)" <joseph.cloutier at alcatel-lucent.com> wrote:

> 
> I need to use netem on a fragmented packet stream - put fragments out of
> order, etc.
> How can I do this?  It looks like netem is working on re-assembled packets.
> 
> Thanks, Joe
> 

Netem does not look at the contents of the packets, so it should not
be reassembling packets. Are you using netfilter (iptables or ebtables)
on your system? I would suspect that is the problem.
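
One quick way to check for that (an editor's sketch; exact module names vary
by kernel version) is to look for loaded conntrack/defragmentation modules
and for active rules:

  lsmod | grep -E 'conntrack|defrag'   # connection tracking pulls in defrag
  iptables -t raw -L -n -v             # rules in the raw table, with hit counters
  ebtables -L                          # bridge-level rules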

From shemminger at linux-foundation.org  Sun Sep  5 09:25:04 2010
From: shemminger at linux-foundation.org (Stephen Hemminger)
Date: Sun, 5 Sep 2010 09:25:04 -0700
Subject: hierarchical netem
In-Reply-To: <7F298ACC76CC154F832B6D02852D169F02826FC4@xxxxxxxxxxxxxxxxxxxxx>
References: <7F298ACC76CC154F832B6D02852D169F02826FC4@xxxxxxxxxxxxxxxxxxxxx>
Message-ID: <20100905092504.0cf89a56@nehalam>

On Tue, 31 Aug 2010 10:08:49 -0500
"Aamer Akhter (aakhter)" <aakhter at cisco.com> wrote:

> Folks,
> 
> I'm trying to create a set of netem structures that act multiple times
> on a packet. I had thought that if I used HTB, this might be possible.
> However, it looks like only the single leaf netem is performed on the
> traffic rather than the entire chain of netems.
> 
> Thoughts welcome...
> 
> # create top level htb
> tc qdisc add dev eth1.3500 root handle 1: htb default 10
> tc class add dev eth1.3500 parent 1: classid 1:1 htb rate 1000mbit
> 
> # create child class off of handle 1: (1:10 takes all traffic by default)
> tc class add dev eth1.3500 parent 1: classid 1:10 htb rate 1000mbit
> tc qdisc add dev eth1.3500 parent 1:10 handle 2: netem delay 50ms 20ms distribution normal
> 
> # create child of #2, to add delay
> tc class add dev eth1.3500 parent 2: classid 2:10 htb rate 1000mbit
> tc qdisc add dev eth1.3500 parent 2:10 handle 3: netem delay 100ms
> 
> # create child of #3, to add loss
> tc class add dev eth1.3500 parent 3: classid 3:10 htb rate 1000mbit
> #tc qdisc add dev eth1.3500 parent 3:10 handle 4: netem loss 50%

This isn't going to work because netem tags each packet with its time
to send when it is inserted into the queue; since there is only one
control block, the time gets overwritten.
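
A workaround in line with the first answer in this thread (an editor's
sketch; the 150ms value simply folds the two quoted delays together and is an
assumption) is to combine all the impairments in a single leaf netem instead
of chaining them:

  # one netem leaf carrying the delay and the loss the chain tried to stack
  tc qdisc add dev eth1.3500 parent 1:10 handle 2: netem delay 150ms 20ms distribution normal loss 50%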

-- 

From akalpana at TechMahindra.com  Tue Sep 14 02:41:26 2010
From: akalpana at TechMahindra.com (Kalpana A)
Date: Tue, 14 Sep 2010 15:11:26 +0530
Subject: link to download netem tool
Message-ID: <919537DB7E7AB845A0B9D304CCC85E82717AE5@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>

Hi,

Can you please suggest the link from which I can download the netem tool?

Thanks & regards,
  Kalpana


From shemminger at linux-foundation.org  Tue Sep 14 09:42:50 2010
From: shemminger at linux-foundation.org (Stephen Hemminger)
Date: Tue, 14 Sep 2010 09:42:50 -0700
Subject: link to download netem tool
In-Reply-To: <919537DB7E7AB845A0B9D304CCC85E82717AE5@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
References: <919537DB7E7AB845A0B9D304CCC85E82717AE5@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Message-ID: <20100914094250.44a3391a@nehalam>

On Tue, 14 Sep 2010 15:11:26 +0530
"Kalpana A" <akalpana at TechMahindra.com> wrote:

> Hi,
>
> Can you please suggest the link from which I can download the netem tool?

It is part of the standard Linux kernel and the iproute2 utilities.
 http://www.linuxfoundation.org/collaborate/workgroups/networking/netem
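
To confirm netem is present on a given machine (an editor's sketch; sch_netem
has been in mainline kernels since 2.6.7, and eth0 is a placeholder):

  modprobe sch_netem                 # usually auto-loaded by tc
  tc qdisc add dev eth0 root netem delay 100ms
  tc qdisc show dev eth0             # should list the netem qdisc
  tc qdisc del dev eth0 root         # clean up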

From dokaspar.ietf at gmail.com  Fri Sep 17 07:11:34 2010
From: dokaspar.ietf at gmail.com (Dominik Kaspar)
Date: Fri, 17 Sep 2010 16:11:34 +0200
Subject: Problem with loss on egress and ingress interfaces
Message-ID: <AANLkTi=pNSsX8RXrPJM=N6gHTKATLoyp-DpBFHFdX+Uy@xxxxxxxxxxxxxx>

Hi,

I made strange observations when adding packet loss with netem. For
easy configuration, I wrote a script (see below) to specify the properties
of an interface in a single command: the bandwidth, delay and loss
(and, for other testing purposes, the length of the netem queue). For
example, my default test settings are set like this:

./netem.sh eth0 ifb0 600Kbps 600Kbps 50ms 50ms 0% 0% 1000

For measuring the goodput, I use wget to download a 50 MB file (3
times) from a fast local webserver (with tcp_no_metrics_save=1 on both
sides). With 0% loss on egress and ingress the resulting goodput is
consistently 560 KB/s. If I now increase the loss of only the egress
interface, the goodput does not go down as much as expected; even
with 60% loss, the goodput is over 400 KB/s! This might be because
only ACKs are lost, not data, but still it sounds quite a bit too good to
be true...

EGRESS loss:
 0%  -->  560 KB/s
 10% --> 510 KB/s
 30% --> 455 KB/s
 60% --> 435 KB/s
 80% --> failure, no connection established

If I only add packet loss to the ingress interface, the goodput sinks
down a lot more. For 60% loss on incoming packets, a goodput of
77 KB/s is still received, which is much higher than I thought. I
expected the TCP throughput to completely die for this huge value
of 60% loss. How is that explainable?

However, the really strange thing is that no matter how much packet
loss is added, the achieved goodput seems to be quite similar, around
190 KB/s for packet loss from 0.1% - 10% (with the interesting
exception of 1% loss). For packet loss as large as 30% or 60%, the
goodput suffers a bit more, but not as much as expected.

INGRESS loss:
 0%  -->  560 KB/s
 0.1%  -->  190 KB/s
 1% -->  146 KB/s
 5% --> 199 KB/s
 10% --> 195 KB/s
 30% --> 136 KB/s
 60% --> 75 KB/s

When I set both egress and ingress interfaces to the same value, the
achieved goodput is almost exactly the same as for only changing the
ingress interface.

There might be something wrong with netem, or maybe TCP has some
mechanisms to adapt very well to huge amounts of packet loss? But
more likely the problem is my script, so I pasted it below. It would be
good to get some expert feedback on it :)

Best regards,
Dominik

------------------------------------------------------------

#!/bin/bash

# Parameters are <egress iface> <ingress iface> <egress bw> <ingress bw>
# <egress delay> <ingress delay> <egress loss> <ingress loss> <netem queue limit>
# Example: sudo ./netem.sh eth0 ifb0 600Kbps 600Kbps 50ms 50ms 0% 0% 1000

iface_out=$1    # egress interface [ex: eth0]
iface_in=$2     # ingress interface [ex: ifb0]
bw_out=$3       # egress bandwidth [ex: 600Kbps]
bw_in=$4        # ingress bandwidth
delay_out=$5    # egress delay [ex: 50ms]
delay_in=$6     # ingress delay
loss_out=$7     # egress loss [ex: 1%]
loss_in=$8      # ingress loss
netem_limit=$9  # length of netem delay queue [packets]

# set up a pseudo interface to be used at ingress:
modprobe ifb
ip link set dev $iface_in up

# delete (possibly existing) qdiscs on the specified interfaces:
tc qdisc del dev $iface_out root
tc qdisc del dev $iface_out ingress
tc qdisc del dev $iface_in root

# add an ingress qdisc and redirect incoming traffic to the ifb device:
tc qdisc add dev $iface_out ingress
tc filter add dev $iface_out parent ffff: protocol ip u32 match u32 0 0 \
    flowid 1:1 action mirred egress redirect dev $iface_in >> /dev/null

# add an HTB qdisc on the egress interface:
tc qdisc add dev $iface_out root handle 1: htb default 1
tc class add dev $iface_out parent 1: classid 1:1 htb rate $bw_out \
    ceil $bw_out cburst 0 burst 0

# add an HTB qdisc on the ingress interface:
tc qdisc add dev $iface_in root handle 1: htb default 1
tc class add dev $iface_in parent 1: classid 1:1 htb rate $bw_in \
    ceil $bw_in cburst 0 burst 0

# add a netem qdisc adding latency and loss on the egress and ingress interfaces:
tc qdisc add dev $iface_out parent 1:1 handle 10: netem delay $delay_out \
    loss $loss_out limit $netem_limit
tc qdisc add dev $iface_in parent 1:1 handle 10: netem delay $delay_in \
    loss $loss_in limit $netem_limit

From stefano.salsano at uniroma2.it  Fri Sep 17 08:08:49 2010
From: stefano.salsano at uniroma2.it (Stefano Salsano)
Date: Fri, 17 Sep 2010 17:08:49 +0200
Subject: Problem with loss on egress and ingress interfaces
In-Reply-To: <AANLkTi=pNSsX8RXrPJM=N6gHTKATLoyp-DpBFHFdX+Uy@xxxxxxxxxxxxxx>
References: <AANLkTi=pNSsX8RXrPJM=N6gHTKATLoyp-DpBFHFdX+Uy@xxxxxxxxxxxxxx>
Message-ID: <4C938481.9010809@xxxxxxxxxxx>

Dear Dominik,
see below

Dominik Kaspar wrote:
> Hi,
> 
> I made strange observations when adding packet loss with netem. For
> easy configuration, I wrote a script (see below) to specify the properties
> of an interface in a single command: the bandwidth, delay and loss
> (and, for other testing purposes, the length of the netem queue). For
> example, my default test settings are set like this:
> 
> ./netem.sh eth0 ifb0 600Kbps 600Kbps 50ms 50ms 0% 0% 1000
> 
> For measuring the goodput, I use wget to download a 50 MB file (3
> times) from a fast local webserver (with tcp_no_metrics_save=1 on both
> sides). With 0% loss on egress and ingress the resulting goodput is
> consistently 560 KB/s. If I now increase the loss of only the egress
> interface, the goodput does not go down as much as expected; even
> with 60% loss, the goodput is over 400 KB/s! This might be because
> only ACKs are lost, not data, but still it sounds quite a bit too good to
> be true...
> 

this could be not so strange due to the "cumulative" nature of TCP ACKs;
it does not matter if you lose some ACKs, since when an ACK arrives it
recovers everything.
So, for example, if the TCP has reached a given window dimension, say
corresponding to 20 packets, and assuming for simplicity that you have one
ACK for each packet, in principle you could lose 19 ACKs and receive
one ACK and TCP will not notice any loss
(real things are a bit more complicated due to the dynamic nature of the TCP
window...)

> EGRESS loss:
>  0%  -->  560 KB/s
>  10% --> 510 KB/s
>  30% --> 455 KB/s
>  60% --> 435 KB/s
>  80% --> failure, no connection established
> 
> If I only add packet loss to the ingress interface, the goodput sinks
> down a lot more. For 60% loss on incoming packets, a goodput of
> 77 KB/s is still received, which is much higher than I thought. I
> expected the TCP throughput to completely die for this huge value
> of 60% loss. How is that explainable?

if you have 560 KB/s and 60% loss, you still have over 200 KB/s available,
so TCP does its good job and achieves 77 KB/s :-)

> However, the really strange thing is that no matter how much packet
> loss is added, the achieved goodput seems to be quite similar, around
> 190 KB/s for packet loss from 0.1% - 10% (with the interesting
> exception of 1% loss). For packet loss as large as 30% or 60%, the
> goodput suffers a bit more, but not as much as expected.
> 
> INGRESS loss:
>  0%  -->  560 KB/s
>  0.1%  -->  190 KB/s
>  1% -->  146 KB/s
>  5% --> 199 KB/s
>  10% --> 195 KB/s
>  30% --> 136 KB/s
>  60% --> 75 KB/s
> 
you may want to refer to these sources for the analysis of TCP throughput
vs loss:
http://www.slac.stanford.edu/comp/net/wan-mon/thru-vs-loss.html
http://www.psc.edu/networking/papers/model_ccr97.ps

a quick plot of this formula
Rate <= (MSS/RTT)*(1 / sqrt{p})
looks not so far from your results... see
http://netgroup.uniroma2.it/Stefano_Salsano/d/tcp-loss-vs-throughput.pdf
where I simply plot: 1 / sqrt{p}
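
(A quick numeric check, with values assumed by the editor: taking MSS = 1448
bytes and RTT = 100 ms, i.e. the two 50 ms delays configured in the script,
p = 1% gives Rate <= (1448 B / 0.1 s) * (1 / sqrt(0.01)) = 14480 B/s * 10,
roughly 145 KB/s, which is close to the measured 146 KB/s.)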

> When I set both egress and ingress interfaces to the same value, the
> achieved goodput is almost exactly the same as for only changing the
> ingress interface.
> 
as I explained before, loss on the ACKs has a limited impact on TCP
throughput

> There might be something wrong with netem, or maybe TCP has some
> mechanisms to adapt very well to huge amounts of packet loss? But
> more likely the problem is my script, so I pasted it below. It would be
> good to get some expert feedback on it :)
> 

I would NOT use TCP for checking these scripts :-)
it is much better to use a UDP-based tool (for example iperf)
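
For instance (an editor's sketch using iperf 2 options; the server address is
a placeholder):

  # on the receiver
  iperf -s -u
  # on the sender: offer ~600 KB/s (4.8 Mbit/s) of UDP and report the loss seen
  iperf -c 192.168.1.1 -u -b 4800K -i 1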

Cheers,
Stefano

> Best regards,
> Dominik
> 
> ------------------------------------------------------------
> 
> #!/bin/bash
> 
> # Parameters are <egress iface> <ingress iface> <egress bw> <ingress bw>
> # <egress delay> <ingress delay> <egress loss> <ingress loss> <netem queue limit>
> # Example: sudo ./netem.sh eth0 ifb0 600Kbps 600Kbps 50ms 50ms 0% 0% 1000
> 
> iface_out=$1    # egress interface [ex: eth0]
> iface_in=$2     # ingress interface [ex: ifb0]
> bw_out=$3       # egress bandwidth [ex: 600Kbps]
> bw_in=$4        # ingress bandwidth
> delay_out=$5    # egress delay [ex: 50ms]
> delay_in=$6     # ingress delay
> loss_out=$7     # egress loss [ex: 1%]
> loss_in=$8      # ingress loss
> netem_limit=$9  # length of netem delay queue [packets]
> 
> # set up a pseudo interface to be used at ingress:
> modprobe ifb
> ip link set dev $iface_in up
> 
> # delete (possibly existing) qdiscs on the specified interfaces:
> tc qdisc del dev $iface_out root
> tc qdisc del dev $iface_out ingress
> tc qdisc del dev $iface_in root
> 
> # add an ingress qdisc and redirect incoming traffic to the ifb device:
> tc qdisc add dev $iface_out ingress
> tc filter add dev $iface_out parent ffff: protocol ip u32 match u32 0 0 \
>     flowid 1:1 action mirred egress redirect dev $iface_in >> /dev/null
> 
> # add an HTB qdisc on the egress interface:
> tc qdisc add dev $iface_out root handle 1: htb default 1
> tc class add dev $iface_out parent 1: classid 1:1 htb rate $bw_out \
>     ceil $bw_out cburst 0 burst 0
> 
> # add an HTB qdisc on the ingress interface:
> tc qdisc add dev $iface_in root handle 1: htb default 1
> tc class add dev $iface_in parent 1: classid 1:1 htb rate $bw_in \
>     ceil $bw_in cburst 0 burst 0
> 
> # add a netem qdisc adding latency and loss on the egress and ingress interfaces:
> tc qdisc add dev $iface_out parent 1:1 handle 10: netem delay $delay_out \
>     loss $loss_out limit $netem_limit
> tc qdisc add dev $iface_in parent 1:1 handle 10: netem delay $delay_in \
>     loss $loss_in limit $netem_limit
> 
> 
> ------------------------------------------------------------------------
> 


-- 
*******************************************************************
Stefano Salsano
Dipartimento Ingegneria Elettronica
Universita' di Roma "Tor Vergata"
Via del Politecnico, 1 - 00133 Roma - ITALY

http://netgroup.uniroma2.it/Stefano_Salsano/

E-mail  : stefano.salsano at uniroma2.it
Cell.   : +39 320 4307310
Office  : (Tel.) +39 06 72597770  (Fax.) +39 06 72597435
*******************************************************************

From joseph.cloutier at alcatel-lucent.com  Wed Sep 22 12:08:19 2010
From: joseph.cloutier at alcatel-lucent.com (Cloutier, Joseph (Joseph))
Date: Wed, 22 Sep 2010 14:08:19 -0500
Subject: netem handling fragmented packets - how
In-Reply-To: <20100905092309.399758c0@nehalam>
Message-ID: <7D3B6706FA74174B8C1AC24710890745135EF1BC6C@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>

Steve - thanks for the response. I got some more feedback from Pablo Ayuso. It looks like some time ago conntrack was added to the kernel to allow other higher-level functions to operate correctly on re-assembled packets.

So with the latest Linux kernels, reassembly is happening before netem comes into the picture. This is kind of unfortunate. I was thinking about dropping back to an old Linux kernel, but I am worried that the latest hardware types (like the IBM x3250 we are trying to use for netem) might not work with the older Linux.

It is a fragmented packet issue we are studying for LTE wireless network components. Netem would have been perfect for that.

Is it possible to put the netem hooks into the kernel below the conntrack code? We are also seeing re-assembled packets come out of the IBM blade, even though they should be fragmented (we are specifying a 1500 MTU with ifconfig for if0/if1 - the bridge we are using) - we are using 2000 byte pings and getting 2000 byte ping frames through the bridge.
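
A quick way to see where reassembly happens (an editor's sketch; the target
address is a placeholder) is to send pings larger than the MTU and watch for
IP fragments on each side of the bridge:

  ping -s 2000 10.0.0.2 &
  tcpdump -ni if0 'ip[6:2] & 0x3fff != 0'   # match only fragmented IP packets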

This kills a nice feature of netem - working with fragmented packets.

Anyway, thanks for the response, Joe Cloutier, ALU.

-----Original Message-----
From: Stephen Hemminger [mailto:shemminger at linux-foundation.org]
Sent: Sunday, September 05, 2010 12:23 PM
To: Cloutier, Joseph (Joseph)
Cc: 'netem at linuxfoundation.org'
Subject: Re: netem handling fragmented packets - how


On Fri, 3 Sep 2010 10:40:32 -0500
"Cloutier, Joseph (Joseph)" <joseph.cloutier at alcatel-lucent.com> wrote:

>
> I need to use netem on a fragmented packet stream - put fragments out of
> order, etc.
> How can I do this?  It looks like netem is working on re-assembled packets.
>
> Thanks, Joe
>

Netem does not look at the contents of the packets, so it should not
be reassembling packets. Are you using netfilter (iptables or ebtables)
on your system? I would suspect that is the problem.
-------------- next part --------------
An embedded message was scrubbed...
From: Pablo Neira Ayuso <pablo at netfilter.org>
Subject: Re: How can I stop conntrack from re-assembling packets in the kernel?
Date: Mon, 20 Sep 2010 17:05:06 -0500
Size: 3246
Url: http://lists.linux-foundation.org/pipermail/netem/attachments/20100922/e263e17b/attachment.eml 

From dokaspar.ietf at gmail.com  Thu Sep 23 09:02:20 2010
From: dokaspar.ietf at gmail.com (Dominik Kaspar)
Date: Thu, 23 Sep 2010 18:02:20 +0200
Subject: Problem with loss on egress and ingress interfaces
In-Reply-To: <4C938481.9010809@xxxxxxxxxxx>
References: <AANLkTi=pNSsX8RXrPJM=N6gHTKATLoyp-DpBFHFdX+Uy@xxxxxxxxxxxxxx>
	<4C938481.9010809@xxxxxxxxxxx>
Message-ID: <AANLkTi=Mwmgj6iXOALWY_Z_PFrZpq3WT0jj2FXvwUvW-@xxxxxxxxxxxxxx>

Hi Stefano,

Thanks for your detailed answer, including many interesting
references. I'm glad that my measurements basically make sense, and I
agree with your suggestion to first use UDP for validation of my
network setup.

(Maybe because I have not found the time to read your suggested
references) I still find it counter-intuitive that the "loss vs.
throughput" curve looks the way it does. Is there a simple explanation
of why losing 0.1% of the packets has such a dramatic effect (compared
to losing none), while losing 20%, 40% or 60% is basically all the
same thing...

Best regards,
Dominik


On Fri, Sep 17, 2010 at 5:08 PM, Stefano Salsano
<stefano.salsano at uniroma2.it> wrote:
> Dear Dominik,
> see below
>
> Dominik Kaspar wrote:
>>
>> Hi,
>>
>> I made strange observations when adding packet loss with netem. For
>> easy configuration, I wrote a script (see below) to specify the properties
>> of an interface in a single command: the bandwidth, delay and loss
>> (and, for other testing purposes, the length of the netem queue). For
>> example, my default test settings are set like this:
>>
>> ./netem.sh eth0 ifb0 600Kbps 600Kbps 50ms 50ms 0% 0% 1000
>>
>> For measuring the goodput, I use wget to download a 50 MB file (3
>> times) from a fast local webserver (with tcp_no_metrics_save=1 on both
>> sides). With 0% loss on egress and ingress the resulting goodput is
>> consistently 560 KB/s. If I now increase the loss of only the egress
>> interface, the goodput does not go down as much as expected; even
>> with 60% loss, the goodput is over 400 KB/s! This might be because
>> only ACKs are lost, not data, but still it sounds quite a bit too good to
>> be true...
>>
>
> this could be not so strange due to the "cumulative" nature of TCP ACKs;
> it does not matter if you lose some ACKs, since when an ACK arrives it recovers
> everything.
> So, for example, if the TCP has reached a given window dimension, say
> corresponding to 20 packets, and assuming for simplicity that you have one ACK
> for each packet, in principle you could lose 19 ACKs and receive one ACK
> and TCP will not notice any loss
> (real things are a bit more complicated due to the dynamic nature of the TCP
> window...)
>
>> EGRESS loss:
>>  0%  -->  560 KB/s
>>  10% --> 510 KB/s
>>  30% --> 455 KB/s
>>  60% --> 435 KB/s
>>  80% --> failure, no connection established
>>
>> If I only add packet loss to the ingress interface, the goodput sinks
>> down a lot more. For 60% loss on incoming packets, a goodput of
>> 77 KB/s is still received, which is much higher than I thought. I
>> expected the TCP throughput to completely die for this huge value
>> of 60% loss. How is that explainable?
>
> if you have 560 KB/s and 60% loss, you still have over 200 KB/s available,
> so TCP does its good job and achieves 77 KB/s :-)
>
>> However, the really strange thing is that no matter how much packet
>> loss is added, the achieved goodput seems to be quite similar, around
>> 190 KB/s for packet loss from 0.1% - 10% (with the interesting
>> exception of 1% loss). For packet loss as large as 30% or 60%, the
>> goodput suffers a bit more, but not as much as expected.
>>
>> INGRESS loss:
>>  0%  -->  560 KB/s
>>  0.1%  -->  190 KB/s
>>  1% -->  146 KB/s
>>  5% --> 199 KB/s
>>  10% --> 195 KB/s
>>  30% --> 136 KB/s
>>  60% --> 75 KB/s
>>
> you may want to refer to these sources for the analysis of TCP throughput vs
> loss
> http://www.slac.stanford.edu/comp/net/wan-mon/thru-vs-loss.html
> http://www.psc.edu/networking/papers/model_ccr97.ps
>
> a quick plot of this formula
> Rate <= (MSS/RTT)*(1 / sqrt{p})
> looks not so far from your results... see
> http://netgroup.uniroma2.it/Stefano_Salsano/d/tcp-loss-vs-throughput.pdf
> where I simply plot: 1 / sqrt{p}
>
>> When I set both egress and ingress interfaces to the same value, the
>> achieved goodput is almost exactly the same as for only changing the
>> ingress interface.
>>
> as I explained before, loss on the ACKs has a limited impact on TCP
> throughput
>
>> There might be something wrong with netem, or maybe TCP has some
>> mechanisms to adapt very well to huge amounts of packet loss? But
>> more likely the problem is my script, so I pasted it below. It would be
>> good to get some expert feedback on it :)
>>
> I would NOT use TCP for checking these scripts :-)
> it is much better to use a UDP-based tool (for example iperf)
>
> Cheers,
> Stefano
>
>> Best regards,
>> Dominik
>>
>> ------------------------------------------------------------
>>
>> #!/bin/bash
>>
>> # Parameters are <egress iface> <ingress iface> <egress bw> <ingress bw>
>> # <egress delay> <ingress delay> <egress loss> <ingress loss> <netem queue limit>
>> # Example: sudo ./netem.sh eth0 ifb0 600Kbps 600Kbps 50ms 50ms 0% 0% 1000
>>
>> iface_out=$1    # egress interface [ex: eth0]
>> iface_in=$2     # ingress interface [ex: ifb0]
>> bw_out=$3       # egress bandwidth [ex: 600Kbps]
>> bw_in=$4        # ingress bandwidth
>> delay_out=$5    # egress delay [ex: 50ms]
>> delay_in=$6     # ingress delay
>> loss_out=$7     # egress loss [ex: 1%]
>> loss_in=$8      # ingress loss
>> netem_limit=$9  # length of netem delay queue [packets]
>>
>> # set up a pseudo interface to be used at ingress:
>> modprobe ifb
>> ip link set dev $iface_in up
>>
>> # delete (possibly existing) qdiscs on the specified interfaces:
>> tc qdisc del dev $iface_out root
>> tc qdisc del dev $iface_out ingress
>> tc qdisc del dev $iface_in root
>>
>> # add an ingress qdisc and redirect incoming traffic to the ifb device:
>> tc qdisc add dev $iface_out ingress
>> tc filter add dev $iface_out parent ffff: protocol ip u32 match u32 0 0 \
>>     flowid 1:1 action mirred egress redirect dev $iface_in >> /dev/null
>>
>> # add an HTB qdisc on the egress interface:
>> tc qdisc add dev $iface_out root handle 1: htb default 1
>> tc class add dev $iface_out parent 1: classid 1:1 htb rate $bw_out \
>>     ceil $bw_out cburst 0 burst 0
>>
>> # add an HTB qdisc on the ingress interface:
>> tc qdisc add dev $iface_in root handle 1: htb default 1
>> tc class add dev $iface_in parent 1: classid 1:1 htb rate $bw_in \
>>     ceil $bw_in cburst 0 burst 0
>>
>> # add a netem qdisc adding latency and loss on the egress and ingress interfaces:
>> tc qdisc add dev $iface_out parent 1:1 handle 10: netem delay $delay_out \
>>     loss $loss_out limit $netem_limit
>> tc qdisc add dev $iface_in parent 1:1 handle 10: netem delay $delay_in \
>>     loss $loss_in limit $netem_limit
>>
>>
>> ------------------------------------------------------------------------
>>
>
>
> --
> *******************************************************************
> Stefano Salsano
> Dipartimento Ingegneria Elettronica
> Universita' di Roma "Tor Vergata"
> Via del Politecnico, 1 - 00133 Roma - ITALY
>
> http://netgroup.uniroma2.it/Stefano_Salsano/
>
> E-mail  : stefano.salsano at uniroma2.it
> Cell.   : +39 320 4307310
> Office  : (Tel.) +39 06 72597770  (Fax.) +39 06 72597435
> *******************************************************************
>

From stefano.salsano at uniroma2.it  Thu Sep 23 14:25:08 2010
From: stefano.salsano at uniroma2.it (Stefano Salsano)
Date: Thu, 23 Sep 2010 23:25:08 +0200
Subject: Problem with loss on egress and ingress interfaces
In-Reply-To: <AANLkTi=Mwmgj6iXOALWY_Z_PFrZpq3WT0jj2FXvwUvW-@xxxxxxxxxxxxxx>
References: <AANLkTi=pNSsX8RXrPJM=N6gHTKATLoyp-DpBFHFdX+Uy@xxxxxxxxxxxxxx>	<4C938481.9010809@xxxxxxxxxxx>
	<AANLkTi=Mwmgj6iXOALWY_Z_PFrZpq3WT0jj2FXvwUvW-@xxxxxxxxxxxxxx>
Message-ID: <4C9BC5B4.6080008@xxxxxxxxxxx>

Dominik Kaspar wrote:
> Hi Stefano,
> 
> Thanks for your detailed answer, including many interesting
> references. I'm glad that my measurements basically make sense, and I
> agree with your suggestion to first use UDP for validation of my
> network setup.
> 
> (Maybe because I have not found the time to read your suggested
> references) I still find it counter-intuitive that the "loss vs.
> throughput" curve looks the way it does. Is there a simple explanation
> of why losing 0.1% of the packets has such a dramatic effect (compared
> to losing none), while losing 20%, 40% or 60% is basically all the
> same thing...

Hi Dominik,

the dynamics of TCP are very complex
(see for example the "Congestion control" section in
http://www.ssfnet.org/Exchange/tcp/tcpTutorialNotes.html)
so I admit I cannot give a simple explanation

I would also add that your data:
 >>>  0%  -->  560 KB/s
 >>>  0.1%  -->  190 KB/s
 >>>  1% -->  146 KB/s
 >>>  5% --> 199 KB/s
 >>>  10% --> 195 KB/s
 >>>  30% --> 136 KB/s
 >>>  60% --> 75 KB/s
looks like the result of a single experiment or only a few, as there is this
oscillation around 1%.
As a suggestion, if you really want to study this problem, you should
start producing more reliable results (e.g. performing the experiment on
a larger set of loss values, e.g. from 0.1% to 1% at 0.1% steps,
from 1% to 5% at 0.5% steps, from 5% to 10% at 1% steps).

For each loss value you should run, say, 30 experiments so that you get the
mean value and the confidence intervals.
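
A minimal sketch of such a measurement campaign (an editor's addition; it
assumes the netem.sh script from this thread and a placeholder download URL):

  #!/bin/bash
  # 30 wget runs per ingress loss value; elapsed times land in results.txt
  for loss in 0.1% 0.5% 1% 2% 5% 10%; do
      ./netem.sh eth0 ifb0 600Kbps 600Kbps 50ms 50ms 0% $loss 1000
      for run in $(seq 1 30); do
          /usr/bin/time -f "$loss run $run: %e s" \
              wget -q -O /dev/null http://server/50MB.bin
      done
  done 2> results.txt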

Cheers,
Stefano

> 
> Best regards,
> Dominik
> 
> 
> On Fri, Sep 17, 2010 at 5:08 PM, Stefano Salsano
> <stefano.salsano at uniroma2.it> wrote:
>> Dear Dominik,
>> see below
>>
>> Dominik Kaspar wrote:
>>> Hi,
>>>
>>> I made strange observations when adding packet loss with netem. For
>>> easy configuration, I wrote a script (see below) to specify the properties
>>> of an interface in a single command: the bandwidth, delay and loss
>>> (and, for other testing purposes, the length of the netem queue). For
>>> example, my default test settings are set like this:
>>>
>>> ./netem.sh eth0 ifb0 600Kbps 600Kbps 50ms 50ms 0% 0% 1000
>>>
>>> For measuring the goodput, I use wget to download a 50 MB file (3
>>> times) from a fast local webserver (with tcp_no_metrics_save=1 on both
>>> sides). With 0% loss on egress and ingress the resulting goodput is
>>> consistently 560 KB/s. If I now increase the loss of only the egress
>>> interface, the goodput does not go down as much as expected; even
>>> with 60% loss, the goodput is over 400 KB/s! This might be because
>>> only ACKs are lost, not data, but still it sounds quite a bit too good to
>>> be true...
>>>
>> this could be not so strange due to the "cumulative" nature of TCP ACKs;
>> it does not matter if you lose some ACKs, since when an ACK arrives it recovers
>> everything.
>> So, for example, if the TCP has reached a given window dimension, say
>> corresponding to 20 packets, and assuming for simplicity that you have one ACK
>> for each packet, in principle you could lose 19 ACKs and receive one ACK
>> and TCP will not notice any loss
>> (real things are a bit more complicated due to the dynamic nature of the TCP
>> window...)
>>
>>> EGRESS loss:
>>>  0%  -->  560 KB/s
>>>  10% --> 510 KB/s
>>>  30% --> 455 KB/s
>>>  60% --> 435 KB/s
>>>  80% --> failure, no connection established
>>>
>>> If I only add packet loss to the ingress interface, the goodput sinks
>>> down a lot more. For 60% loss on incoming packets, a goodput of
>>> 77 KB/s is still received, which is much higher than I thought. I
>>> expected the TCP throughput to completely die for this huge value
>>> of 60% loss. How is that explainable?
>> if you have 560 KB/s and 60% loss, you still have over 200 KB/s available,
>> so TCP does its good job and achieves 77 KB/s :-)
>>
>>> However, the really strange thing is that no matter how much packet
>>> loss is added, the achieved goodput seems to be quite similar, around
>>> 190 KB/s for packet loss from 0.1% - 10% (with the interesting
>>> exception of 1% loss). For packet loss as large as 30% or 60%, the
>>> goodput suffers a bit more, but not as much as expected.
>>>
>>> INGRESS loss:
>>>  0%  -->  560 KB/s
>>>  0.1%  -->  190 KB/s
>>>  1% -->  146 KB/s
>>>  5% --> 199 KB/s
>>>  10% --> 195 KB/s
>>>  30% --> 136 KB/s
>>>  60% --> 75 KB/s
>>>
>> you may want to refer to these sources for the analysis of TCP throughput vs
>> loss
>> http://www.slac.stanford.edu/comp/net/wan-mon/thru-vs-loss.html
>> http://www.psc.edu/networking/papers/model_ccr97.ps
>>
>> a quick plot of this formula
>> Rate <= (MSS/RTT)*(1 / sqrt{p})
>> looks not so far from your results... see
>> http://netgroup.uniroma2.it/Stefano_Salsano/d/tcp-loss-vs-throughput.pdf
>> where I simply plot: 1 / sqrt{p}
>>
>>> When I set both egress and ingress interfaces to the same value, the
>>> achieved goodput is almost exactly the same as for only changing the
>>> ingress interface.
>>>
>> as I explained before, loss on the ACKs has a limited impact on TCP
>> throughput
>>
>>> There might be something wrong with netem, or maybe TCP has some
>>> mechanisms to adapt very well to huge amounts of packet loss? But
>>> more likely the problem is my script, so I pasted it below. It would be
>>> good to get some expert feedback on it :)
>>>
>> I would NOT use TCP for checking these scripts :-)
>> it is much better to use a UDP-based tool (for example iperf)
>>
>> Cheers,
>> Stefano
>>
>>> Best regards,
>>> Dominik
>>>
>>> ------------------------------------------------------------
>>>
>>> #!/bin/bash
>>>
>>> # Parameters are <egress iface> <ingress iface> <egress bw> <ingress bw>
>>> # <egress delay> <ingress delay> <egress loss> <ingress loss> <netem queue limit>
>>> # Example: sudo ./netem.sh eth0 ifb0 600Kbps 600Kbps 50ms 50ms 0% 0% 1000
>>>
>>> iface_out=$1    # egress interface [ex: eth0]
>>> iface_in=$2     # ingress interface [ex: ifb0]
>>> bw_out=$3       # egress bandwidth [ex: 600Kbps]
>>> bw_in=$4        # ingress bandwidth
>>> delay_out=$5    # egress delay [ex: 50ms]
>>> delay_in=$6     # ingress delay
>>> loss_out=$7     # egress loss [ex: 1%]
>>> loss_in=$8      # ingress loss
>>> netem_limit=$9  # length of netem delay queue [packets]
>>>
>>> # set up a pseudo interface to be used at ingress:
>>> modprobe ifb
>>> ip link set dev $iface_in up
>>>
>>> # delete (possibly existing) qdiscs on the specified interfaces:
>>> tc qdisc del dev $iface_out root
>>> tc qdisc del dev $iface_out ingress
>>> tc qdisc del dev $iface_in root
>>>
>>> # add an ingress qdisc and redirect incoming traffic to the ifb device:
>>> tc qdisc add dev $iface_out ingress
>>> tc filter add dev $iface_out parent ffff: protocol ip u32 match u32 0 0 \
>>>     flowid 1:1 action mirred egress redirect dev $iface_in >> /dev/null
>>>
>>> # add an HTB qdisc on the egress interface:
>>> tc qdisc add dev $iface_out root handle 1: htb default 1
>>> tc class add dev $iface_out parent 1: classid 1:1 htb rate $bw_out \
>>>     ceil $bw_out cburst 0 burst 0
>>>
>>> # add an HTB qdisc on the ingress interface:
>>> tc qdisc add dev $iface_in root handle 1: htb default 1
>>> tc class add dev $iface_in parent 1: classid 1:1 htb rate $bw_in \
>>>     ceil $bw_in cburst 0 burst 0
>>>
>>> # add a netem qdisc adding latency and loss on the egress and ingress interfaces:
>>> tc qdisc add dev $iface_out parent 1:1 handle 10: netem delay $delay_out \
>>>     loss $loss_out limit $netem_limit
>>> tc qdisc add dev $iface_in parent 1:1 handle 10: netem delay $delay_in \
>>>     loss $loss_in limit $netem_limit
>>>
>>>
>>> ------------------------------------------------------------------------
>>>
>>
>> --
>> *******************************************************************
>> Stefano Salsano
>> Dipartimento Ingegneria Elettronica
>> Universita' di Roma "Tor Vergata"
>> Via del Politecnico, 1 - 00133 Roma - ITALY
>>
>> http://netgroup.uniroma2.it/Stefano_Salsano/
>>
>> E-mail  : stefano.salsano at uniroma2.it
>> Cell.   : +39 320 4307310
>> Office  : (Tel.) +39 06 72597770  (Fax.) +39 06 72597435
>> *******************************************************************
>>
> 
> 


-- 
*******************************************************************
Stefano Salsano
Dipartimento Ingegneria Elettronica
Universita' di Roma "Tor Vergata"
Via del Politecnico, 1 - 00133 Roma - ITALY

http://netgroup.uniroma2.it/Stefano_Salsano/

E-mail  : stefano.salsano at uniroma2.it
Cell.   : +39 320 4307310
Office  : (Tel.) +39 06 72597770  (Fax.) +39 06 72597435
*******************************************************************

