NetEm loss correlation

Hi,

I recently came across the following link [1] pointing out some
weaknesses of NetEm in generating correlated - clustered - losses. They say
the patch correcting the loss model was submitted, but I am not sure
whether it has actually been applied.

I would be grateful if somebody could give some pointers to the current status.

By the way, I am running the standard Ubuntu saucy distribution (kernel 3.11.10).

Thanks,
Simone

[1]: http://netgroup.uniroma2.it/twiki/bin/view.cgi/Main/NetemCLG
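
One quick way to check the status on a given machine is simply to try the loss
models directly. This is only a sketch, assuming a test interface eth0 that is
safe to reconfigure: "loss PERCENT CORRELATION" is the older correlated mode,
while "loss state" and "loss gemodel" are the variants documented on the page
in [1], so if the latter are accepted, that code is present.

#tc qdisc add dev eth0 root netem loss 1% 25%
#tc qdisc del dev eth0 root
#tc qdisc add dev eth0 root netem loss gemodel 1% 10% 70% 0.1%
#tc qdisc show dev eth0
#tc qdisc del dev eth0 root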

From bharat.mnnit06 at gmail.com  Tue Sep 23 22:43:45 2014
From: bharat.mnnit06 at gmail.com (Bharat Singh)
Date: Tue, 23 Sep 2014 18:43:45 -0400
Subject: tc qdisc revert issues with 10G interface
Message-ID: <CAOu4=m814UwC0MCiO5axZgUoKLYRn0znSscgj8Bu=+VGoXpPeA@xxxxxxxxxxxxxx>

Hello,

We are trying to use the traffic controller (tc) on a 10G NIC, but it looks to
behave differently from a 1G NIC.

-------------------------------------------------------------
1G NIC:

#tc qdisc list dev em1
qdisc mq 0: root
qdisc pfifo_fast 0: parent :1 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: parent :2 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: parent :3 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: parent :4 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1

#tc qdisc add dev em1 root netem delay 100ms
#
#tc qdisc list dev em1
qdisc netem 8001: root refcnt 9 limit 1000 delay 100.0ms
#
#tc qdisc del dev em1 root
#
#tc qdisc list dev em1
qdisc mq 0: root
qdisc pfifo_fast 0: parent :1 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: parent :2 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: parent :3 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: parent :4 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1

* Here we can see that on deleting the netem qdisc, it reverts to the
default tc configuration with 4 queues.

----------------------------------------------------------------------------
10G NIC p3p1:

#tc qdisc list dev p3p1
qdisc mq 0: root
qdisc pfifo_fast 0: parent :1 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: parent :2 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: parent :3 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: parent :4 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: parent :5 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc pfifo_fast 0: parent :6 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1

#tc qdisc add dev p3p1 root netem delay 100ms

#tc qdisc list dev p3p1
qdisc netem 8001: root refcnt 9 limit 1000 delay 100.0ms

#tc qdisc del dev p3p1 root

#tc qdisc list dev p3p1
qdisc mq 0: root
qdisc pfifo_fast 0: parent :1 bands 3 priomap  1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1

** Here it reverts back to a single-queue model. This behavior is
different from the 1G card; is tc not compatible with a 10G NIC, or am I
missing something here?
Suggestions are deeply appreciated.

-bharat
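
One workaround worth trying (a sketch only, not verified on this particular
ixgbe setup) is to put the multiqueue root back explicitly after removing
netem, or to leave mq at the root and attach netem underneath an individual
TX queue class instead of replacing the root:

#tc qdisc add dev p3p1 root handle 1: mq
#tc qdisc show dev p3p1
#tc qdisc add dev p3p1 parent 1:1 netem delay 100ms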

From bharat.mnnit06 at gmail.com  Thu Sep 25 17:00:42 2014
From: bharat.mnnit06 at gmail.com (Bharat Singh)
Date: Thu, 25 Sep 2014 13:00:42 -0400
Subject: tc qdisc revert issues with 10G interface
In-Reply-To: <CAOu4=m814UwC0MCiO5axZgUoKLYRn0znSscgj8Bu=+VGoXpPeA@xxxxxxxxxxxxxx>
References: <CAOu4=m814UwC0MCiO5axZgUoKLYRn0znSscgj8Bu=+VGoXpPeA@xxxxxxxxxxxxxx>
Message-ID: <CAOu4=m-EXb0RAOeRD8Bh+FvZVSNKYycBOT1tj7CxuiV2PmnoiA@xxxxxxxxxxxxxx>

Gentle Ping!

On Tue, Sep 23, 2014 at 6:43 PM, Bharat Singh <bharat.mnnit06 at gmail.com> wrote:

> [...]

From ferlin at simula.no  Sun Sep 28 20:58:39 2014
From: ferlin at simula.no (Simone Ferlin-Oliveira)
Date: Sun, 28 Sep 2014 22:58:39 +0200
Subject: NetEm loss correlation
In-Reply-To: <CAPJm55Ov5PbGdSA3LapPF0vpuYkTZjo861-nKaUD6WOFQrPfrg@xxxxxxxxxxxxxx>
References: <CAPJm55Ov5PbGdSA3LapPF0vpuYkTZjo861-nKaUD6WOFQrPfrg@xxxxxxxxxxxxxx>
Message-ID: <CAPJm55PmzNMFYjTnHHj8xbZdEBrTie6bKfzVcYN9xRG8w=ngVQ@xxxxxxxxxxxxxx>

Thanks for answering my question. I actually have Ubuntu saucy with
MPTCP, which *did not* contain the patch for netem error correlation.
Now I have applied it manually.

I have another question regarding the random losses:
I found a few (actually only 2!) papers evaluating different network
emulators, which did not mention noticeable precision issues with netem
random loss, although precision degrades above a certain loss percentage.
Is there any change, or anything to say about that?

References:
[1]: Nussbaum, A Comparative Study of Network Link Emulators
[2]: Jurgelionis, An Empirical Study of NetEm Network Emulation Functionalities
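
A crude way to see how close netem comes to a requested random-loss rate on a
given machine is to measure it end to end (a sketch only; eth0 and the peer
address 10.0.0.2 are placeholders, and the path is assumed otherwise lossless).
The packet loss reported by ping should come out near 1%, and the dropped vs
Sent counters in the qdisc statistics should agree with it:

#tc qdisc add dev eth0 root netem loss 1%
#ping -q -c 10000 -i 0.01 10.0.0.2
#tc -s qdisc show dev eth0
#tc qdisc del dev eth0 root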

On 22 September 2014 10:08, Simone Ferlin-Oliveira <ferlin at simula.no> wrote:
> [...]

From wolfram.lautenschlaeger at alcatel-lucent.com  Tue Sep  9 12:43:07 2014
From: wolfram.lautenschlaeger at alcatel-lucent.com (LAUTENSCHLAEGER, Wolfram (Wolfram))
Date: Tue, 09 Sep 2014 12:43:07 -0000
Subject: loss probability bias
Message-ID: <0A452E1DADEF254C9A7AC1969B878128099D07@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>

Hi,

it seems there is a systematic bias between the preset loss
probability and what netem really does.

root@berlin:~# tc -s qdisc show dev eth2
qdisc netem 8001: root refcnt 65 limit 1000 loss 0.0108%
Sent 5099690512 bytes 3368520 pkt (dropped 184, overlimits 0 requeues 2)
backlog 0b 0p requeues 2

The preset and reported drop ratio is 1.08e-4,
but the reported packet and drop numbers indicate:
184/3368520 = 5.4e-5

The behavior of the affected traffic fits the reported drop numbers
but not the requested drop ratio.

The deviation from the preset value is not constant, as the example might
suggest. I observed scenarios with only 1/4 of the expected losses.
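
For anyone wanting to reproduce the check, the realized drop ratio can be
pulled straight out of the qdisc counters. A sketch, using eth2 as in the
output above; it relates drops to sent+dropped rather than to sent alone,
which makes no practical difference at these rates:

#tc -s qdisc show dev eth2 | awk '/Sent/ { gsub(/[(,)]/,""); printf "observed drop ratio: %.2e\n", $7/($4+$7) }'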

Kernel 3.16
CPU: Intel Xeon, 8 cores
Ethernet driver: ixgbe

Regards
Wolfram Lautenschläger


