Christian, Stephen: I made the following setup -> http://img29.imageshack.us/img29/4196/iperftest2.png
..and after I increased the "netem limit" value with "tc qdisc change dev eth0 root netem limit 100000", I'm able to reach a 90 Mbps UDP flood between 192.168.1.1 and 192.168.2.1 without problems:

[root@ ~]# iperf -c 192.168.2.1 -u -fm -t60 -d -b 90m
------------------------------------------------------------
Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size: 0.04 MByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 192.168.2.1, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 0.01 MByte (default)
------------------------------------------------------------
[  4] local 192.168.1.1 port 45717 connected with 192.168.2.1 port 5001
[  3] local 192.168.1.1 port 5001 connected with 192.168.2.1 port 35866
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-60.0 sec  647 MBytes  90.4 Mbits/sec
[  4] Sent 461470 datagrams
[  3]  0.0-60.0 sec  646 MBytes  90.4 Mbits/sec  0.036 ms  441/461537 (0.096%)
[  3]  0.0-60.0 sec  1 datagrams received out-of-order
[  4] Server Report:
[  4]  0.0-60.0 sec  647 MBytes  90.4 Mbits/sec  0.299 ms  0/461469 (0%)
[  4]  0.0-60.0 sec  1 datagrams received out-of-order
[root@ ~]#

However, could you explain what this "netem limit" value actually means? As I understand it, it is the memory ('number of packets' * 'packet size') allocated to the queue. And why does one need to increase the "netem limit" when the RTT increases, in order to avoid packet loss?

regards,
martin

2011/3/31 Christian Beier <beier at informatik.hu-berlin.de>:
> On Thu, 31 Mar 2011 09:19:09 -0700
> Stephen Hemminger <shemminger at linux-foundation.org> wrote:
>
>> On Thu, 31 Mar 2011 17:59:59 +0200
>> Christian Beier <beier at informatik.hu-berlin.de> wrote:
>>
>> > On Thu, 31 Mar 2011 08:43:34 -0700
>> > Stephen Hemminger <shemminger at linux-foundation.org> wrote:
>> >
>> > > On Tue, 29 Mar 2011 23:20:16 +0200
>> > > Christian Beier <beier at informatik.hu-berlin.de> (by way of Christian Beier <beier at informatik.hu-berlin.de>) wrote:
>> > >
>> > > > Hi there,
>> > > > I'm using 'tc qdisc add dev eth0 root netem delay 250ms' on an Ubuntu
>> > > > 10.10 server box to emulate a WAN to test out a custom protocol. The
>> > > > client machine is connected via Fast Ethernet with a switch in between.
>> > > > Testing UDP throughput with iperf reveals that netem delay affects UDP
>> > > > throughput, which to my understanding shouldn't happen. Is this a known
>> > > > issue? This was raised before
>> > > > (https://lists.linux-foundation.org/pipermail/netem/2006-May/000917.html),
>> > > > but with no decisive outcome...
>> > >
>> > > You probably don't have a big enough queue to hold 250ms of full packet rate,
>> > > and packets are getting dropped.
>> >
>> > Well, varying the socket send buffer size didn't help; in fact it
>> > didn't change anything, even with a send buffer as huge as 80 MB...
>> > Or are you referring to something else I overlooked in the docs?
>>
>> The queue is the one in netem (limit parameter).
>
> D'oh. Thanks a lot! (Maybe this should be explicitly mentioned in the
> docs?)
>
> Thanks again,
>   Christian
>
> --
> what is, is;
> what is not is possible.
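To make the arithmetic behind that limit explicit: the netem limit is counted in packets, and netem has to hold roughly one delay's worth of the offered packet rate before the first packet is released. A back-of-the-envelope sketch using the figures from this thread (90 Mbit/s of 1470-byte datagrams through a 250 ms delay; eth0 and an already installed root netem qdisc are assumed):

  # packets per second * delay = packets netem must buffer at any instant
  RATE_BPS=90000000      # offered load, bits per second
  PKT_BYTES=1470         # datagram size used by iperf above
  DELAY_S=0.25           # netem delay in seconds
  awk -v r=$RATE_BPS -v s=$PKT_BYTES -v d=$DELAY_S \
      'BEGIN { printf "packets in flight: %d\n", r / (s * 8) * d }'   # ~1900 here

  # so the default limit of 1000 packets overflows; round up generously:
  tc qdisc change dev eth0 root netem delay 250ms limit 4000

The limit of 100000 used above works for the same reason: it is simply far larger than the roughly 2000 packets that actually need to sit in the queue.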
From lyt0112 at gmail.com Wed Apr 6 05:12:52 2011
From: lyt0112 at gmail.com (lyt0112 at gmail.com)
Date: Wed, 06 Apr 2011 12:12:52 +0000
Subject: Could I use netem to reduce the effect of packet reordering?
Message-ID: <001636ed68ab76f66904a03ee9b9@xxxxxxxxxx>

Hello All

My network has a lot of packet reordering, and it makes my TCP video play back unevenly on the video client (speeding up and slowing down). I noticed that netem supports generating latency and reordering... I want to know: can I use the latency function to wait for the out-of-order packets, so that I can see the video play smoothly?

Any suggestions will be appreciated!!!

Brian Lu

From shemminger at linux-foundation.org Wed Apr 6 08:21:56 2011
From: shemminger at linux-foundation.org (Stephen Hemminger)
Date: Wed, 6 Apr 2011 08:21:56 -0700
Subject: Could I use netem to reduce the effect of packet reordering?
In-Reply-To: <001636ed68ab76f66904a03ee9b9@xxxxxxxxxx>
References: <001636ed68ab76f66904a03ee9b9@xxxxxxxxxx>
Message-ID: <20110406082156.5df562bd@nehalam>

On Wed, 06 Apr 2011 12:12:52 +0000
lyt0112 at gmail.com wrote:

> Hello All
>
> My network has a lot of packet reordering, and it makes my TCP video play
> back unevenly on the video client (speeding up and slowing down).
> I noticed that netem supports generating latency and reordering...
> I want to know: can I use the latency function to wait for the
> out-of-order packets, so that I can see the video play smoothly?
>
> Any suggestions will be appreciated!!!
>
> Brian Lu

No, netem is used to make latency and reordering worse, for testing.
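For context, this is the direction netem works in: it injects reordering rather than absorbing it. A minimal illustration of the documented reorder option, assuming eth0 and that its root qdisc can be replaced for the test:

  # Delay every packet by 10ms, but send 25% of packets (with 50% correlation)
  # immediately, so they overtake the delayed ones and arrive out of order.
  tc qdisc add dev eth0 root netem delay 10ms reorder 25% 50%

  # remove the emulation when the test is over
  tc qdisc del dev eth0 root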
From jonathan.crone at magorcorp.com Wed Apr 6 13:01:41 2011
From: jonathan.crone at magorcorp.com (Jonathan P. Crone)
Date: Wed, 6 Apr 2011 16:01:41 -0400
Subject: About reordering with netem: and how to get jitter WITHOUT reordering...
Message-ID: <F8DE3BF9-3721-4FB3-A1F9-D7FE9B6D6EF4@xxxxxxxxxxxxx>

Brian Lu's recent question about using netem to clean up reordering, and Stephen's response that netem is designed to make reordering worse for testing purposes, have prompted me to ask a question I have been meaning to ask for quite some time.

Specifically: the netem page on the Linux Foundation website SEEMS to have an error in its description of how to get jitter without reordering by using a pfifo. Testing on 3 different kernels, I get varying behaviour (mostly errors) using the example from the Linux Foundation website. Details follow.

I work in a telecollaboration development company and use netem to create various impaired networks for video streaming testing. One area in which netem does not meet my needs is the introduction of jitter.

The specific issue is reported as a virtue on the authoritative netem page at the Linux Foundation, with the following suggestion (which is technically a very pathological example; in my experience only a cellular data type network would have THIS MUCH jitter... 10ms plus/minus 100ms!!!). Quoting the website:

  Starting with version 1.1 (in 2.6.15), netem will reorder packets if the
  delay value has lots of jitter. If you don't want this behaviour then
  replace the internal queue discipline tfifo with a pure packet fifo pfifo.
  The following example has lots of jitter, but the packets will stay in order.

  # tc qdisc add dev eth0 root handle 1: netem delay 10ms 100ms
  # tc qdisc add dev eth0 parent 1:1 pfifo limit 1000

The problem is: I DO NOT want reordering, I just want the jitter. (I.e., I want the interval between packet 100 and packet 101 to be 100 ms, the interval between packet 101 and 102 to be 92 ms, between packet 102 and 103 to be 105 ms, etc. If I want packet 102 to arrive before packet 101, I want to control that reorder behaviour explicitly, independently of jitter. A bonded T1 might typically have out-of-order packets, but a very stable delay distribution.)

So, an example of how I would do this. The following should apply a static 100 ms delay with a random jitter of plus/minus 10 ms:

  tc qdisc add dev eth0 root handle 1: netem delay 100ms 10ms

This causes jitter AND reordering... The netem page says: add a pfifo to eliminate the reordering. Here's the problem: on my Ubuntu 9.10 boxes running the 2.6.31-14-generic kernel, when I follow the example and attempt to enable the pfifo as a child of the netem parent, I get an error message:

  RTNETLINK answers: Invalid argument

If I try on my Ubuntu 10.04 boxes running the 2.6.32-28-generic kernel and attempt to enable the pfifo, I get the error message:

  RTNETLINK answers: Operation not supported

If I use an Ubuntu 8.04 box running a 2.6.24-27 kernel, I don't get an RTNETLINK error; I get jitter, and I don't get the reordering (my desired behaviour). The problem is: the 8.04 boxes are my "application" boxes...

So... clearly it's tied to kernel behaviour... but I apologize for being at a bit of a loss as to why the 'authoritative' netem reference would provide a solution which doesn't seem to work on some of the kernels currently out there. (For my lab's needs, the 9.10 boxes are the "default" netem boxes, and the 8.04 and 10.04 boxes are running the application endpoints.) Having the jitter work only on the 8.04-based systems is not optimal, as the 9.10 boxes are the 'stable' network emulator boxes, representing the pipe that the 8.04 and 10.04 machines talk over.

Suggestions, thoughts, recommendations would be appreciated.

Jonathan Crone P.Eng
Verification Engineering
Magor Communications
jonathan.crone at magorcorp.com
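Since the failure mode evidently differs from kernel to kernel, it can be worth probing a given box before relying on the website example. A minimal sketch of such a probe, assuming eth0 is the interface under test and that briefly replacing its root qdisc is acceptable; this only detects the RTNETLINK errors described above, it does not work around them:

  tc qdisc add dev eth0 root handle 1: netem delay 100ms 10ms
  if tc qdisc add dev eth0 parent 1:1 pfifo limit 1000; then
      echo "child pfifo accepted: jitter without reordering should work here"
      tc -s qdisc show dev eth0      # should list both the netem 1: and the pfifo
  else
      echo "child pfifo rejected by this kernel (RTNETLINK error printed above)"
  fi
  tc qdisc del dev eth0 root         # clean up the test qdisc either way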
From bernat at luffy.cx Sat Apr 23 02:17:17 2011
From: bernat at luffy.cx (Vincent Bernat)
Date: Sat, 23 Apr 2011 11:17:17 +0200
Subject: Simulate several low bandwidth links over the same link (use of drr)
Message-ID: <m3ei4tmin6.fsf@xxxxxxxxxxxx>

Hi!

I have a router with a 100 Mbps link and I would like to use it to simulate several "slow" links on top of it for some tests. For example, I want the traffic from IP A to be shaped to 5 Mbps and delayed by 100 ms to simulate an ADSL link. Traffic from IP B should be shaped to 300 Kbps and delayed by 200 ms to simulate a 3G link, etc.

I started from the following example from the netem documentation, which works fine:

  # tc qdisc add dev eth0 root handle 1: prio
  # tc qdisc add dev eth0 parent 1:3 handle 30: tbf rate 10Mbit buffer 1600 limit 3000
  # tc qdisc add dev eth0 parent 30:1 handle 31: netem delay 200ms 10ms distribution normal
  # tc filter add dev eth0 protocol ip parent 1:0 prio 3 u32 match u32 0 0 flowid 1:3

In my case, I don't want to use "prio" because there are not enough bands (16 bands max). I found "drr" instead, which seemed fine (moreover, contrary to prio, there is fairness involved). The manual page says:

,----
| Like SFQ, DRR is only useful when it owns the queue -- it is a pure
| scheduler and does not delay packets. Attaching non-work-conserving
| qdiscs like tbf to it does not make sense -- other qdiscs in the active
| list will also become inactive until the dequeue operation succeeds.
| Embed DRR within another qdisc like HTB or HFSC to ensure it owns the
| queue.
`----

Well, I don't quite understand that. prio is also a work-conserving qdisc and has no problem embedding non-work-conserving qdiscs. I don't see why drr should block when one of its qdiscs doesn't have a packet ready. So, I ignored the warning and here is what I tried:

  # tc qdisc add dev eth0 root handle 1: drr
  # tc class add dev eth0 parent 1: classid 1:13 drr
  # tc qdisc add dev eth0 parent 1:13 handle 13: tbf rate 5Mbit buffer 1600 limit 3000
  # tc qdisc add dev eth0 parent 13:1 handle 130: netem delay 200ms 10ms distribution normal
  # tc filter add dev eth0 protocol ip parent 1:0 prio 3 u32 match u32 0 0 flowid 1:13
  # ping www.google.fr
  (timeout)
  # tc -s class ls dev eth0
  class drr 1:13 root leaf 13: quantum 1514b
   Sent 20889 bytes 207 pkt (dropped 0, overlimits 0 requeues 0)
   backlog 0b 0p requeues 0
   deficit 0b
  class tbf 13:1 parent 13: leaf 130:
  # tc -s class ls dev eth0
  class drr 1:13 root leaf 13: quantum 1514b
   Sent 31297 bytes 276 pkt (dropped 0, overlimits 0 requeues 0)
   backlog 0b 0p requeues 0
   deficit 0b
  class tbf 13:1 parent 13: leaf 130:

What is wrong? What other qdisc could I use at the root of my device without having to specify some bandwidth? I could cascade several prio qdiscs to get more bands, but when my bandwidth starves, there would be no room left for flows that sit deep in the hierarchy.

Thanks for any hint.
-- 
printk("MASQUERADE: No route: Rusty's brain broke!\n");
        2.4.3. linux/net/ipv4/netfilter/ipt_MASQUERADE.c

From acmorton at att.com Tue Apr 26 20:07:00 2011
From: acmorton at att.com (Al Morton)
Date: Tue, 26 Apr 2011 23:07:00 -0400
Subject: netem and VLANs
Message-ID: <201104270306.p3R36KEL013155@xxxxxxxxxxxxxxxxxxxx>

Hi netem-list,

I'm using netem on a VLAN trunk as part of testing for the IPPM working group in IETF.

At a high level, four VLANs are crossing the host implementing netem, with four "brctl" bridges accomplishing the connectivity between two Ethernet interfaces (eth1 and eth2). I set up four instances of netem queuing disciplines, one on each VLAN:

  tc qdisc change dev eth1.100 root netem delay 100ms 50ms
  tc qdisc change dev eth1.200 root netem delay 100ms 50ms
  tc qdisc change dev eth1.300 root netem delay 100ms 50ms
  tc qdisc change dev eth1.400 root netem delay 100ms 50ms

and it seems to work as designed, except that sometimes one of the test packet streams traversing the bridging host experiences more delay than the others. I suspect that there may be some contention for resources, and therefore some delay introduced that netem doesn't know about.

OTOH, it does seem possible to introduce a single netem instance on dev eth1 that successfully adds delay to the packets in all VLANs.

Hoping someone has experience running multiple instances of netem, or netem on VLANs, that might be relevant here (I scanned the archive back many months and didn't see anything, but feel free to point me to a thread from the past with all the answers).

thanks for reading,
Al
From bernat at luffy.cx Wed Apr 27 04:55:16 2011
From: bernat at luffy.cx (Vincent Bernat)
Date: Wed, 27 Apr 2011 13:55:16 +0200
Subject: Simulate several low bandwidth links over the same link (use of drr)
In-Reply-To: <m3ei4tmin6.fsf@xxxxxxxxxxxx>
References: <m3ei4tmin6.fsf@xxxxxxxxxxxx>
Message-ID: <6cf86568a520a631b7d059dc9dc73524@xxxxxxxx>

On Sat, 23 Apr 2011 11:17:17 +0200, Vincent Bernat wrote:

> So, I ignored the warning and here is what I tried:
>
> # tc qdisc add dev eth0 root handle 1: drr
> # tc class add dev eth0 parent 1: classid 1:13 drr
> # tc qdisc add dev eth0 parent 1:13 handle 13: tbf rate 5Mbit buffer 1600 limit 3000
> # tc qdisc add dev eth0 parent 13:1 handle 130: netem delay 200ms 10ms distribution normal
> # tc filter add dev eth0 protocol ip parent 1:0 prio 3 u32 match u32 0 0 flowid 1:13
> # ping www.google.fr
> (timeout)

In fact, this setup is almost working: drr was also dropping the ARP packets, since they matched no filter. I added a filter to let ARP packets through, and everything is working fine now:

  tc qdisc add dev $iface root handle 1: drr
  tc class add dev $iface parent 1: classid 1:1 drr
  tc qdisc add dev $iface parent 1:1 handle 1001: \
     sfq
  tc filter add dev $iface protocol arp parent 1:0 \
     prio 1 u32 match u32 0 0 flowid 1:1

  tc class add dev eth0 parent 1: classid 1:13 drr
  tc qdisc add dev eth0 parent 1:13 handle 13: tbf \
     rate 5Mbit buffer 1600 limit 3000
  tc qdisc add dev eth0 parent 13:1 handle 130: netem \
     delay 200ms 10ms distribution normal
  tc filter add dev $iface protocol ip parent 1:0 \
     prio 2 u32 match u32 0 0 flowid 1:13
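Extending that working pattern to the second link from the original question (the 300 Kbps / 200 ms "3G" profile) is mostly mechanical. A hypothetical sketch: 192.0.2.2 stands in for "IP B", and the catch-all "match u32 0 0" IP filter above would need to be narrowed (for example to "IP A") so it no longer grabs every IP packet first:

  # second emulated link: shaped to 300 Kbps, delayed 200 ms +/- 10 ms
  tc class add dev eth0 parent 1: classid 1:14 drr
  tc qdisc add dev eth0 parent 1:14 handle 14: tbf \
     rate 300kbit buffer 1600 limit 3000
  tc qdisc add dev eth0 parent 14:1 handle 140: netem \
     delay 200ms 10ms distribution normal
  # steer "IP B" (placeholder address) into the new class
  tc filter add dev eth0 protocol ip parent 1:0 \
     prio 2 u32 match ip dst 192.0.2.2/32 flowid 1:14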
From shemminger at linux-foundation.org Thu Apr 28 15:29:49 2011
From: shemminger at linux-foundation.org (Stephen Hemminger)
Date: Thu, 28 Apr 2011 15:29:49 -0700
Subject: netem and VLANs
In-Reply-To: <201104270306.p3R36KEL013155@xxxxxxxxxxxxxxxxxxxx>
References: <201104270306.p3R36KEL013155@xxxxxxxxxxxxxxxxxxxx>
Message-ID: <20110428152949.6725a287@nehalam>

On Tue, 26 Apr 2011 23:07:00 -0400
Al Morton <acmorton at att.com> wrote:

> Hi netem-list,
>
> I'm using netem on a VLAN trunk as part of testing for
> the IPPM working group in IETF.
>
> At a high level, four VLANs are crossing the host implementing netem,
> with four "brctl" bridges accomplishing the connectivity between
> two Ethernet interfaces (eth1 and eth2). I set up four instances of
> netem queuing disciplines, one on each VLAN:
>
>   tc qdisc change dev eth1.100 root netem delay 100ms 50ms
>   tc qdisc change dev eth1.200 root netem delay 100ms 50ms
>   tc qdisc change dev eth1.300 root netem delay 100ms 50ms
>   tc qdisc change dev eth1.400 root netem delay 100ms 50ms
>
> and it seems to work as designed, except that sometimes one of the
> test packet streams traversing the bridging host experiences more
> delay than the others. I suspect that there may be some contention
> for resources, and therefore some delay introduced that netem
> doesn't know about.
>
> OTOH, it does seem possible to introduce a single netem instance
> on dev eth1 that successfully adds delay to the packets in all VLANs.
>
> Hoping someone has experience running multiple instances of netem,
> or netem on VLANs, that might be relevant here (I scanned the
> archive back many months and didn't see anything, but feel free
> to point me to a thread from the past with all the answers).

There is by default no queuing at the VLAN level, so the qdisc there has no effect. The queue is at the physical device level, so a way to achieve the same effect is to use some classful discipline (drr, priority, ...) and assign a netem to each sub-discipline (child) with different values.
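A sketch of that suggestion for the four-VLAN case above. It assumes eth1 is the egress trunk and, purely for illustration, that each VLAN carries a distinct subnet (10.0.1.0/24 through 10.0.4.0/24 are placeholders); whether the u32 filters see the inner IP header on a trunk depends on where the VLAN tag is added (software tagging vs. hardware offload), so treat this as a starting point rather than a recipe:

  # classful root on the physical device; unmatched traffic defaults to band 1
  tc qdisc add dev eth1 root handle 1: prio bands 4 \
     priomap 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

  # one netem child per band (values mirror the original commands; they could differ per band)
  tc qdisc add dev eth1 parent 1:1 handle 10: netem delay 100ms 50ms
  tc qdisc add dev eth1 parent 1:2 handle 20: netem delay 100ms 50ms
  tc qdisc add dev eth1 parent 1:3 handle 30: netem delay 100ms 50ms
  tc qdisc add dev eth1 parent 1:4 handle 40: netem delay 100ms 50ms

  # placeholder classification: one subnet per VLAN (adapt to the real addressing,
  # or to a classifier that can match the VLAN tag directly on your kernel)
  tc filter add dev eth1 parent 1:0 protocol ip prio 1 u32 match ip dst 10.0.1.0/24 flowid 1:1
  tc filter add dev eth1 parent 1:0 protocol ip prio 1 u32 match ip dst 10.0.2.0/24 flowid 1:2
  tc filter add dev eth1 parent 1:0 protocol ip prio 1 u32 match ip dst 10.0.3.0/24 flowid 1:3
  tc filter add dev eth1 parent 1:0 protocol ip prio 1 u32 match ip dst 10.0.4.0/24 flowid 1:4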