Kernel panic after 40min running

Hello,

We're running NetEm with delay to emulate a network environment,
and it works fine.
But after 40 minutes of running or so, a kernel panic occurs.

The conditions under which we are running NetEm are below:

Conditions:
 Kernel: 2.6.9-1, 2.6.12.3 (Fedora Core 3)
 Using Commands:
   tc qdisc add dev eth0 root handle 1: netem delay 10ms
   tc qdisc add dev lo root handle 1: netem delay 10ms
 Machines:
   5 PCs
 Situation:
   With about 200Kbit/s rate traffic, the kernel panic occurs
   in 40 - 60 minutes.

In addition, with about 200Kbit/s rate traffic,
when we run a 30-minute evaluation repeatedly, the kernel panic also occurs.
In this case, we remove the previous NetEm setting with the tc del command
and set the same tc command again in every run.
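For reference, the tc del / re-add cycle described above would look like the following sketch (eth0 and the 10ms delay are taken from the conditions listed; requires root):

```shell
# Tear down the previous netem qdisc on eth0
tc qdisc del dev eth0 root
# Re-add the same netem setting before the next evaluation run
tc qdisc add dev eth0 root handle 1: netem delay 10ms
```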

Moreover, even when the delay time is not specified as follows,
the result is the same.
  tc qdisc add dev eth0 root handle 1: netem
  tc qdisc add dev lo root handle 1: netem

But when only HTB is specified by the tc command as follows,
the kernel panic does not occur on the machine.
  tc qdisc add dev eth0 root handle 1: htb

So I suppose NetEm causes this problem.

Is there any workaround for this?
Thank you in advance.

Sincerely,

--
Takuya SUGIE
  sugie@xxxxxxxxx



From shemminger at osdl.org  Wed Aug  3 12:16:03 2005
From: shemminger at osdl.org (Stephen Hemminger)
Date: Wed Apr 18 12:51:17 2007
Subject: Kernel panic after 40min running
In-Reply-To: <20050802010853.5FC564CC98@xxxxxxxxxxxx>
References: <20050802010853.5FC564CC98@xxxxxxxxxxxx>
Message-ID: <20050803121603.5e7e449c@xxxxxxxxxxxxxxxxx>

On Tue, 02 Aug 2005 10:14:34 +0900
Takuya Sugie <sugie@xxxxxxxxx> wrote:

> Hello,
> 
> We're running NetEm with delay to emulate a network environment,
> and it works fine.
> But after 40 minutes of running or so, a kernel panic occurs.
> 
> The conditions under which we are running NetEm are below:
> 
> Conditions:
>  Kernel: 2.6.9-1, 2.6.12.3 (Fedora Core 3)
>  Using Commands:
>    tc qdisc add dev eth0 root handle 1: netem delay 10ms
>    tc qdisc add dev lo root handle 1: netem delay 10ms
>  Machines:
>    5 PCs
>  Situation:
>    With about 200Kbit/s rate traffic, the kernel panic occurs
>    in 40 - 60 minutes.
> 
> In addition, with about 200Kbit/s rate traffic,
> when we run a 30-minute evaluation repeatedly, the kernel panic also occurs.
> In this case, we remove the previous NetEm setting with the tc del command
> and set the same tc command again in every run.
> 
> Moreover, even when the delay time is not specified as follows,
> the result is the same.
>   tc qdisc add dev eth0 root handle 1: netem
>   tc qdisc add dev lo root handle 1: netem
> 
> But when only HTB is specified by the tc command as follows,
> the kernel panic does not occur on the machine.
>   tc qdisc add dev eth0 root handle 1: htb
> 
> So I suppose NetEm causes this problem.
> 
> Is there any workaround for this?
> Thank you in advance.
> 
> Sincerely,
> 

Could you send a console output traceback when the panic
occurs?

From ji.li3 at hp.com  Wed Aug  3 14:01:04 2005
From: ji.li3 at hp.com (Li, Ji)
Date: Wed Apr 18 12:51:17 2007
Subject: NetEm packet loss doesn't work for TCP?
Message-ID: <628BFCE8B64706469FE4D4852CEC953706DFAA03@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx>

Hi all,
 
I am using NetEm to simulate packet loss situations. When I use "ping"
to test the packet loss rate, it works as expected. But when I use
"ttcp" to test TCP throughput, the throughput remains the same even
after I change the loss rates dramatically. Does anyone have similar
problems?
 
Thanks,
-Ji
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.linux-foundation.org/pipermail/netem/attachments/20050803/c453b08a/attachment.htm
From shemminger at osdl.org  Wed Aug  3 14:59:01 2005
From: shemminger at osdl.org (Stephen Hemminger)
Date: Wed Apr 18 12:51:17 2007
Subject: NetEm packet loss doesn't work for TCP?
In-Reply-To: <628BFCE8B64706469FE4D4852CEC953706DFAA03@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
References: <628BFCE8B64706469FE4D4852CEC953706DFAA03@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Message-ID: <20050803145901.64fc5782@localhost.localdomain>

On Wed, 3 Aug 2005 17:01:04 -0400
"Li, Ji" <ji.li3@xxxxxx> wrote:

> Hi all,
>  
> I am using NetEm to simulate packet loss situations. When I use "ping"
> to test the packet loss rate, it works as expected. But when I use
> "ttcp" to test TCP throughput, the throughput remains the same
> even after I change the loss rates dramatically. Does anyone have
> similar problems?
>  
> Thanks,
> -Ji

TCP should die if loss gets to about 1%. You should see more
retransmits before then.  I would recommend using a tool like tcpdump
with tcptrace to post-process the output.
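A possible capture-and-analyze workflow along those lines (a sketch, not from the original mail; the interface name and ttcp's default port 5001 are assumptions, and the capture needs root):

```shell
# Capture headers during the ttcp run; -s 96 keeps just the headers
tcpdump -i eth0 -s 96 -w ttcp-run.pcap tcp port 5001 &
CAP=$!
# ... run the ttcp throughput test here ...
kill $CAP
# Post-process with tcptrace: -l prints long per-connection output,
# including retransmit counts and RTT statistics
tcptrace -l ttcp-run.pcap
```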

From sugie at sec.co.jp  Thu Aug  4 02:16:56 2005
From: sugie at sec.co.jp (Takuya Sugie)
Date: Wed Apr 18 12:51:17 2007
Subject: Kernel panic after 40min running
In-Reply-To: <20050803121603.5e7e449c@xxxxxxxxxxxxxxxxx>
References: <20050803121603.5e7e449c@xxxxxxxxxxxxxxxxx>
Message-ID: <20050804091112.7048B4CC9A@xxxxxxxxxxxx>

Thank you for replying. 


Stephen Hemminger <shemminger@xxxxxxxx> wrote:
> 
> Could you send a console output traceback when the panic
> occurs?


The following logs are output, and no input is accepted.

---
Unable to handle kernel paging request at virtual address 00121548
 printing eip:
022e1e0b
*pde = 00000000
Oops: 0000 [#1]
Modules linked in: sch_netem md5 ipv6 parport_pc lp parport uhci_hcd hw_random 3c59x floppy dm_snapshot dm_zero dm_mirror ext3 jbd dm_mod
CPU:    0
EIP:    0060:[<022e1e0b>]    Not tainted VLI
EFLAGS: 00010202   (2.6.9-1.667)
EIP is at udp_rcv+0x29c/0x2c5
eax: 00000206   ebx: 00000000   ecx: 00400069   edx: 11df0a40
esi: 001214ac   edi: 00a81140   ebp: 00e93824   esp: 023d4f64
ds: 007b   es: 007b   ss: 0068
Process gawk (pid: 11981, threadinfo=023d4000 task=0ffc37f0)
Stack: 141214ac 0a210180 141214ac 051214ac 0f562500 02374ef8 00000000 02369a00
       022c29b8 00000000 11e93810 0f562500 022c2ec0 0f562500 02372bd0 00000008
       00000000 022ab119 0f562500 02369a00 02415fe0 023d4fdc 022ab1b2 00400068
Call Trace:
Stack pointer is garbage, not printing trace
Code:  Bad EIP value.
 <0>Kernel panic - not syncing: Fatal exception in interrupt


Thanks in advance,

--
Takuya SUGIE
  sugie@xxxxxxxxx


From shemminger at osdl.org  Thu Aug  4 13:54:03 2005
From: shemminger at osdl.org (Stephen Hemminger)
Date: Wed Apr 18 12:51:17 2007
Subject: Re: netem: network hang (2.6.12.3/2.6.13-rc4)
In-Reply-To: <42F2776A.8030208@xxxxxx>
References: <42F2776A.8030208@xxxxxx>
Message-ID: <20050804135403.377c73a0@xxxxxxxxxxxxxxxxx>

On Thu, 04 Aug 2005 22:15:38 +0200
Markus Rehbach <markus.rehbach@xxxxxx> wrote:

> Hi all,
> 
> I used the very basic command 'tc qdisc add dev eth0
> root netem delay 10ms' on my box and ssh'ed to another
> box. For a little traffic I did a 'find /'; after a
> short time the output stopped. I had to bring the
> interface down and up again to solve the problem
> (for a short time only ;-).
> 
> I saw this behaviour in the kernels 2.6.12.3 and 2.6.13-rc4
> on 2 different PCs. 2.6.8(.1) was OK and worked as
> expected on both machines.
> 
> If more information is needed to track it down, please give
> me a hint what is necessary.
> 
> Cheers
> 
> Markus
> 

What is the clock source for PSCHED?
  I.e. CONFIG_NET_SCH_CLK_CPU? CONFIG_NET_SCH_CLK_JIFFIES?
What is the value of HZ?
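One way to answer those questions on a stock distro kernel (a sketch; it assumes the build config is installed under /boot, as Fedora does):

```shell
# Packet scheduler clock source compiled into the running kernel
grep CONFIG_NET_SCH_CLK "/boot/config-$(uname -r)"
# Timer frequency; CONFIG_HZ only appears on kernels where it is configurable,
# older 2.6 x86 kernels were fixed at HZ=1000
grep '^CONFIG_HZ' "/boot/config-$(uname -r)"
```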

From athina at arastra.com  Mon Aug  8 15:59:27 2005
From: athina at arastra.com (Athina Markopoulou)
Date: Wed Apr 18 12:51:17 2007
Subject: bursty loss in netem
Message-ID: <60069.209.3.10.88.1123541967.squirrel@mail>

Hi all,

I am trying to generate bursty loss using the command:
"tc qdisc add dev eth1 root netem drop <percent> <correlation>"

Two things seem to go wrong:
1- the loss does not show significant correlation, even
when I choose correlation 100%
2- when correlation > 0, the measured loss rate is smaller
than the <percent> I specify in the command. The larger the
correlation, the smaller the loss rate I measure.

Is there a known bug, or am I missing something (e.g. is
the correlation in [0%,100%] or in [0,1])?
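For what it's worth, the stock tc syntax spells the option "loss", and both the probability and the correlation are given in percent, e.g.:

```shell
# 10% packet loss; each packet's loss probability depends 25% on
# whether the previous packet was lost (burstier than independent loss)
tc qdisc add dev eth1 root netem loss 10% 25%
```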

Thank you,
Athina

From ji.li3 at hp.com  Tue Aug  9 20:33:56 2005
From: ji.li3 at hp.com (Li, Ji)
Date: Wed Apr 18 12:51:17 2007
Subject: netem's packet loss on bridge?
Message-ID: <628BFCE8B64706469FE4D4852CEC953706DFB396@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx>

Hi all,
 
I am trying to simulate packet loss using NetEm on a bridge machine
(FC4). When I ping from one end to the other through the bridge, it
seems that NetEm is working: some ping packets are lost. But if I run
TCP or UDP (ttcp or iperf), it seems that TCP and UDP don't experience
packet loss (for example, I observed no packet loss for a UDP
transmission even when the loss rate is set to 50%). Any idea what's
happening? Is it because of the bridge? (When I use NetEm for delay,
everything is fine.)
 
Thanks,
-Ji
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://lists.linux-foundation.org/pipermail/netem/attachments/20050809/3d0230c5/attachment.htm
From juliokriger at gmail.com  Fri Aug 12 06:52:31 2005
From: juliokriger at gmail.com (Julio Kriger)
Date: Wed Apr 18 12:51:17 2007
Subject: package delay, package reordering, bandwidth and mtu
Message-ID: <682bc30a0508120652d48395d@xxxxxxxxxxxxxx>

Hi!
  I think that netem needs some modifications to be a better emulator.
  For one thing, I believe that we need to separate package delay from
package reordering. To achieve this we should be able to specify a
delay, jitter and correlation JUST for package delay. This is actually
done.
  Then we should be able to specify (X % or Y-th), delay, jitter and
correlation JUST for package reordering. I mean that (X % of the
packages or the Y-th package) are delayed, to create the reordering,
but how much time the package is delayed is specified with delay,
jitter and correlation.
  And at last we should be able to control the bandwidth and mtu. I
know that all of this can be done with NETEM and TBF, but I believe
that having all this functionality just in one kernel module would be
much better.
  What do you think? I would like to hear some comments!
  Regards,
    Julio



-- 
----------------------------
Julio Kriger
mailto:juliokriger@xxxxxxxxx


From juliokriger at gmail.com  Fri Aug 12 11:58:02 2005
From: juliokriger at gmail.com (Julio Kriger)
Date: Wed Apr 18 12:51:17 2007
Subject: package delay, package reordering, bandwidth and mtu
In-Reply-To: <20050812180505.GA14783@xxxxxxxxxxxxxxxxx>
References: <682bc30a0508120652d48395d@xxxxxxxxxxxxxx>
	<20050812180505.GA14783@xxxxxxxxxxxxxxxxx>
Message-ID: <682bc30a050812115849d1649c@xxxxxxxxxxxxxx>

Mmmm... yes, maybe that is not a great idea. Maybe a better idea
would be if it allowed you to specify which scheme to use for dropping
packages. But that means a LOT of work to do.
Maybe, unconsciously, I was thinking of an "integrated" emulator.
Actually you can use several kinds of qdisc disciplines. That's OK, but
it's annoying because you have to compile and install every
discipline you want to use. If all the disciplines came in a single
kernel module it would not be as annoying as it is today. And if in one
single command line you could define the characteristics of the
network you want to emulate, that would be great! After all, all those
disciplines are meant to help people test
programs/protocols/etc on "real" (emulated) networks. Give them a
single, useful, easy-to-use tool.
Regards,
Julio


On 8/12/05, Joshua Blanton <jblanton@xxxxxxxxxxxxxxxxxxx> wrote:
> Julio Kriger wrote:
> >   And at last we should be able to control the bandwidth and mtu. I
> > know that all of this can be done with NETEM and TBF, but I believe
> > that having all this functionality just in one kernel module would be
> > much better.
> >   What do you think? I would like to hear some comments!
> 
> I think that this is not a particularly great idea.  Limiting
> bandwidth is a rather open-ended concept, and the range of queueing
> options already in the Linux kernel are *much* more suited for
> tailoring to a specific emulation scheme than hacking it into netem.
> For instance, how does one decide what sort of scheme will be used for
> dropping packets?  You can queue different types of packets so that
> the limiter prefers to drop an arbitrary packet type, using the
> existing queueing schemes - would you re-write all of that code and
> stuff it into netem, or would you just have a crippled "bandwidth
> limiting" scheme that implements a straight TBF queue?  How is this
> any improvement over the existing scheme?
> 
> Of course, it's not like it matters, because even if netem supported
> rate limiting one could still use the native queueing functions...  I
> don't see how adding weight to netem would make life better, though.
> 
> Just my thoughts on the matter,
> --jtb
> 
> --
> Those other people in the unfree world who pretend to view us with
> moral disdain might do well to remember that we have achieved this
> level of luxury by way of political liberty.  The free world may be
> gross, vulgar and immoral, but that is not something that the slave
> society can fix.
>                                 Col. Jeff Cooper
> 
> 
> 


-- 
----------------------------
Julio Kriger
mailto:juliokriger@xxxxxxxxx


From JPolachek at texasmutual.com  Fri Aug 12 14:05:50 2005
From: JPolachek at texasmutual.com (Jonathan S. Polacheck)
Date: Wed Apr 18 12:51:17 2007
Subject: Jonathan S. Polacheck/AUSTIN/THE_FUND is out of the office.
Message-ID: <OF9E94B31C.1444156E-ON8625705B.0073E400-8625705B.0073E401@xxxxxxxxxxxxxxx>

I will be out of the office starting 08/12/2005 and will not return until
08/23/2005.

I will respond to your message when I return.


From tkoponen at iki.fi  Tue Aug 16 16:28:39 2005
From: tkoponen at iki.fi (Teemu Koponen)
Date: Wed Apr 18 12:51:17 2007
Subject: Delay variation and gigabit speeds
Message-ID: <770e0d5bd0345e910330a1baea03a058@xxxxxx>

Hi,

While emulating Gigabit WAN delay with the netem qdisc, I experienced an
odd behavior: the delay decreases below the set value. I have set up two
hosts as follows:

# tc qdisc add dev eth1 root netem delay 20ms limit 30000

Then I run iperf between the hosts with a single TCP stream. The speed
stabilizes at about 700Mbit/s, but a ping running in parallel reports
RTTs varying between 32 ms and 45 ms. After iperf finishes, the RTT
slowly increases back to 40ms.

Is some queue overflowing, or what could cause the too-low delay? I
increased the txqueuelens of the interfaces to 30000 packets and it
made TCP stable. The hosts are rather powerful 3+GHz Xeon boxes.
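The txqueuelen change mentioned above can be made per interface, for instance (eth1 as in the example above; requires root):

```shell
# Raise the device transmit queue length to 30000 packets
ifconfig eth1 txqueuelen 30000
# or equivalently with iproute2:
ip link set dev eth1 txqueuelen 30000
```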

TIA,
Teemu

--


From shemminger at osdl.org  Wed Aug 17 09:49:20 2005
From: shemminger at osdl.org (Stephen Hemminger)
Date: Wed Apr 18 12:51:17 2007
Subject: Delay variation and gigabit speeds
In-Reply-To: <770e0d5bd0345e910330a1baea03a058@xxxxxx>
References: <770e0d5bd0345e910330a1baea03a058@xxxxxx>
Message-ID: <20050817094920.1f62f12f@xxxxxxxxxxxxxxxxx>

On Tue, 16 Aug 2005 16:28:39 -0700
Teemu Koponen <tkoponen@xxxxxx> wrote:

> Hi,
> 
> While emulating Gigabit WAN delay with the netem qdisc, I experienced an
> odd behavior: the delay decreases below the set value. I have set up two
> hosts as follows:
> 
> # tc qdisc add dev eth1 root netem delay 20ms limit 30000
> 
> Then I run iperf between the hosts with a single TCP stream. The speed
> stabilizes at about 700Mbit/s, but a ping running in parallel reports
> RTTs varying between 32 ms and 45 ms. After iperf finishes, the RTT
> slowly increases back to 40ms.
> 
> Is some queue overflowing, or what could cause the too-low delay? I
> increased the txqueuelens of the interfaces to 30000 packets and it
> made TCP stable. The hosts are rather powerful 3+GHz Xeon boxes.
> 
> TIA,
> Teemu

You are probably seeing a side effect of the self clocking. The delay
code sends all packets that are ready, but if a packet is not ready
a kernel timer is set.  The timing is better if HZ=1000 (which it was
up until 2.6.13) and you use the CPU clock as the packet scheduler
clock source (i.e. CONFIG_NET_SCH_CLK_CPU=y).

Long term, I intend to look into high-res timers or other patches
for getting better timing accuracy.

From tkoponen at iki.fi  Wed Aug 17 15:40:59 2005
From: tkoponen at iki.fi (Teemu Koponen)
Date: Wed Apr 18 12:51:17 2007
Subject: Delay variation and gigabit speeds
In-Reply-To: <20050817094920.1f62f12f@xxxxxxxxxxxxxxxxx>
References: <770e0d5bd0345e910330a1baea03a058@xxxxxx>
	<20050817094920.1f62f12f@xxxxxxxxxxxxxxxxx>
Message-ID: <3a2a36d03f715c0e3ac077586226d407@xxxxxx>

On Aug 17, 2005, at 9:49, Stephen Hemminger wrote:

Stephen,

> You are probably seeing a side effect of the self clocking. The delay
> code sends all packets that are ready, but if a packet is not ready
> a kernel timer is set.  The timing is better if HZ=1000 (which it was
> up until 2.6.13) and you use the CPU clock as the packet scheduler
> clock source (i.e. CONFIG_NET_SCH_CLK_CPU=y).

I use Xen and it gives no TSC for domains. Unfortunately. So, that's it
for now, I suppose...

Teemu

--


From shemminger at osdl.org  Wed Aug 17 15:48:16 2005
From: shemminger at osdl.org (Stephen Hemminger)
Date: Wed Apr 18 12:51:17 2007
Subject: Delay variation and gigabit speeds
In-Reply-To: <3a2a36d03f715c0e3ac077586226d407@xxxxxx>
References: <770e0d5bd0345e910330a1baea03a058@xxxxxx>
	<20050817094920.1f62f12f@xxxxxxxxxxxxxxxxx>
	<3a2a36d03f715c0e3ac077586226d407@xxxxxx>
Message-ID: <20050817154816.56fd2dd4@xxxxxxxxxxxxxxxxx>

On Wed, 17 Aug 2005 15:40:59 -0700
Teemu Koponen <tkoponen@xxxxxx> wrote:

> On Aug 17, 2005, at 9:49, Stephen Hemminger wrote:
> 
> Stephen,
> 
> > You are probably seeing a side effect of the self clocking. The delay
> > code sends all packets that are ready, but if a packet is not ready
> > a kernel timer is set.  The timing is better if HZ=1000 (which it was
> > up until 2.6.13) and you use the CPU clock as the packet scheduler
> > clock source (i.e. CONFIG_NET_SCH_CLK_CPU=y).
> 
> I use Xen and it gives no TSC for domains. Unfortunately. So, that's it
> for now, I suppose...
> 
> Teemu

netem is pretty realtime based and it probably won't work real well
in virtualized environments.

From tkoponen at iki.fi  Wed Aug 17 17:01:47 2005
From: tkoponen at iki.fi (Teemu Koponen)
Date: Wed Apr 18 12:51:17 2007
Subject: Delay variation and gigabit speeds
In-Reply-To: <20050817154816.56fd2dd4@xxxxxxxxxxxxxxxxx>
References: <770e0d5bd0345e910330a1baea03a058@xxxxxx>
	<20050817094920.1f62f12f@xxxxxxxxxxxxxxxxx>
	<3a2a36d03f715c0e3ac077586226d407@xxxxxx>
	<20050817154816.56fd2dd4@xxxxxxxxxxxxxxxxx>
Message-ID: <ea9677e4ce5ab34b2523505e5d87b0aa@xxxxxx>

On Aug 17, 2005, at 15:48, Stephen Hemminger wrote:

Stephen,

>>> You are probably seeing a side effect of the self clocking. The delay
>>> code sends all packets that are ready, but if a packet is not ready
>>> a kernel timer is set.  The timing is better if HZ=1000 (which it was
>>> up until 2.6.13) and you use the CPU clock as the packet scheduler
>>> clock source (i.e. CONFIG_NET_SCH_CLK_CPU=y).
>>
>> I use Xen and it gives no TSC for domains. Unfortunately. So, that's
>> it
>> for now, I suppose...
>
> netem is pretty realtime based and it probably won't work real well
> in virtualized environments.

Assuming the virtual machine monitor can guarantee precise-enough
timing and enough CPU time, I see no major obstacles. But agreed, the
accuracy netem provides is likely to degrade.

If the VM is well-loaded, the average delay actually remains rather
stable, as the following ping snapshot suggests:

64 bytes from 10.0.0.4: icmp_seq=757 ttl=64 time=31.8 ms
64 bytes from 10.0.0.4: icmp_seq=758 ttl=64 time=42.5 ms
64 bytes from 10.0.0.4: icmp_seq=759 ttl=64 time=31.2 ms
64 bytes from 10.0.0.4: icmp_seq=760 ttl=64 time=42.3 ms
64 bytes from 10.0.0.4: icmp_seq=761 ttl=64 time=31.4 ms
...

The mean deviation stabilizes to 5 ms, which suggests Xen timing has a
granularity of 10ms. The rest is noise due to the virtualization,
probably.

Teemu

--


Frosugiat sec.co.jp  Mon Aug  1 18:14:34 2005
From: sugiasec.co.jp (Takuya Sugie)
Date: Wed Apr 18 17:37:47 2007
Subject: Kernel panic after 40mirunnning
Message-ID: <20050802010853.5FC564CC98@xxxxxxxxxxxx>

Hello,

We'rrunnnig NetEwith delay to emulate network environment,
and iworks fine.
Buafter 40min running or so, kernel panic occurs.

Thconditions ware running NetEm are below:

Conditions:
 Kernel: 2.6.9-1, 2.6.12.3(FedoraCore3)
 Using Commands:
   tc qdisc add dev eth0 roohandl1: netem delay 10ms
   tc qdisc add dev lo roohandl1: netem delay 10ms
 Machines:
   5PCs
 Situation:
   With abou200Kbit/s rattraffic, the kernel panic occurs
   i40 - 60 minutes.

Iaddition, with abou200Kbit/s rate traffic,
wrun 30 minutes evaluation repeatedly, thkernel panic also occurs.
Ithis case, wremove previous NetEm setting by tc del command
and sethsame tc command again in every case.

Moreover, evewhen delay timis not specified as follows, 
ibecomes thsame result.
  tc qdisc add dev eth0 roohandl1: netem
  tc qdisc add dev lo roohandl1: netem

But, wheonly HTB is specified as follows by thtc command,
thkernel panic don'occur on the machine. 
  tc qdisc add dev eth0 roohandl1: htb

So I supposNetEcauses this problem.

Artherany workaround for this?
Thank you iadvance.

Sincerely,

--
Takuya SUGIE
  sugie@xxxxxxxxx



Froshemminger aosdl.org  Wed Aug  3 12:16:03 2005
From: shemminger aosdl.org (Stephen Hemminger)
Date: Wed Apr 18 17:37:47 2007
Subject: Kernel panic after 40mirunnning
In-Reply-To: <20050802010853.5FC564CC98@xxxxxxxxxxxx>
References: <20050802010853.5FC564CC98@xxxxxxxxxxxx>
Message-ID: <20050803121603.5e7e449c@xxxxxxxxxxxxxxxxx>

OTue, 02 Aug 2005 10:14:34 +0900
Takuya Sugi<sugie@xxxxxxxxx> wrote:

> Hello,
> 
> We'rrunnnig NetEwith delay to emulate network environment,
> and iworks fine.
> Buafter 40min running or so, kernel panic occurs.
> 
> Thconditions ware running NetEm are below:
> 
> Conditions:
>  Kernel: 2.6.9-1, 2.6.12.3(FedoraCore3)
>  Using Commands:
>    tc qdisc add dev eth0 roohandl1: netem delay 10ms
>    tc qdisc add dev lo roohandl1: netem delay 10ms
>  Machines:
>    5PCs
>  Situation:
>    With abou200Kbit/s rattraffic, the kernel panic occurs
>    i40 - 60 minutes.
> 
> Iaddition, with abou200Kbit/s rate traffic,
> wrun 30 minutes evaluation repeatedly, thkernel panic also occurs.
> Ithis case, wremove previous NetEm setting by tc del command
> and sethsame tc command again in every case.
> 
> Moreover, evewhen delay timis not specified as follows, 
> ibecomes thsame result.
>   tc qdisc add dev eth0 roohandl1: netem
>   tc qdisc add dev lo roohandl1: netem
> 
> But, wheonly HTB is specified as follows by thtc command,
> thkernel panic don'occur on the machine. 
>   tc qdisc add dev eth0 roohandl1: htb
> 
> So I supposNetEcauses this problem.
> 
> Artherany workaround for this?
> Thank you iadvance.
> 
> Sincerely,
> 

Could you send a consoloutputraceback when the panic
occurs?

Froji.li3 ahp.com  Wed Aug  3 14:01:04 2005
From: ji.li3 ahp.co(Li, Ji)
Date: Wed Apr 18 17:37:47 2007
Subject: NetEpackeloss doesn't work for TCP ?
Message-ID: <628BFCE8B64706469FE4D4852CEC953706DFAA03@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx>

Hi all,
 
I ausing NetEto simulate packet loss situations. When I use "ping"
to testhpacket loss rate, it works as expected. But when I use
"ttcp" to tesTCP throughput, thsame throughput remains the same even
after I changthloss rates dramatically. Does anyone have similar
problems? 
 
Thanks,
-Ji
-------------- nexpar--------------
AHTML attachmenwas scrubbed...
URL: http://lists.linux-foundation.org/pipermail/netem/attachments/20050803/c453b08a/attachment-0001.htm
Froshemminger aosdl.org  Wed Aug  3 14:59:01 2005
From: shemminger aosdl.org (Stephen Hemminger)
Date: Wed Apr 18 17:37:47 2007
Subject: NetEpackeloss doesn't work for TCP ?
In-Reply-To: <628BFCE8B64706469FE4D4852CEC953706DFAA03@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
References: <628BFCE8B64706469FE4D4852CEC953706DFAA03@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Message-ID: <20050803145901.64fc5782@localhost.localdomain>

OWed, 3 Aug 2005 17:01:04 -0400
"Li, Ji" <ji.li3@xxxxxx> wrote:

> Hi all,
>  
> I ausing NetEto simulate packet loss situations. When I use "ping"
> to testhpacket loss rate, it works as expected. But when I use
> "ttcp" to tesTCP throughput, thsame throughput remains the same
> eveafter I changthe loss rates dramatically. Does anyone have
> similar problems? 
>  
> Thanks,
> -Ji

TCP should diif loss gets to abou1%. You should see more
retransmits beforthen.  I would recommend using a tool liktcpdump
with tcptracto posprocess the output.

Frosugiat sec.co.jp  Thu Aug  4 02:16:56 2005
From: sugiasec.co.jp (Takuya Sugie)
Date: Wed Apr 18 17:37:47 2007
Subject: Kernel panic after 40mirunnning
In-Reply-To: <20050803121603.5e7e449c@xxxxxxxxxxxxxxxxx>
References: <20050803121603.5e7e449c@xxxxxxxxxxxxxxxxx>
Message-ID: <20050804091112.7048B4CC9A@xxxxxxxxxxxx>

Thank you for replying. 


StepheHemminger <shemminger@xxxxxxxx> wrote:
> 
> Could you send a consoloutputraceback when the panic
> occurs?


Thfollowing logs aroutput, and any input is not accepted. 

---
Unablto handlkernel paging request at virtual address 00121548
 printing eip:
022e1e0b
*pd= 00000000
Dops: 0000 [#1]
Modules linked in: sch_netemd5 ipv6 parport_pc lp parporuhci_hcd hw_random 3c59x floppy dm_snapshot dm_zero dm_mirror ext3 jbd dm_mod
CPU:    0
EIP:    0060:[<022e1e0b>]    Notainted VLI
EFLAGS: 00010202   (2.6.9-1.667)
EIP is audp_rcv+0x29c/0x2c5
eax: 00000206   ebx: 00000000   ecx: 00400069   edx: 11df0a40
esi: 001214ac   edi: 00a81140   ebp: 00e93824   esp: 023d4f64
ds: 007b   es: 007b   ss: 0068
Process gawk (pid: 11981, threadinfo=023d4000 task=0ffc37f0)
Stack: 141214ac 0a210180 141214ac 051214ac 0f562500 02374ef8 00000000 02369a00
       022c29b8 00000000 11e93810 0f562500 022c2ec0 0f562500 02372bd0 00000008
       00000000 022ab119 0f562500 02369a00 02415fe0 023d4fdc 022ab1b2 00400068
Call Trace:
Stack pointer is garbage, noprinting trace
Code:  Bad EIP value.
 <0>Kernel panic - nosyncing: Fatal exception in interrupt


Thanks iadvance,

--
Takuya SUGIE
  sugie@xxxxxxxxx


Froshemminger aosdl.org  Thu Aug  4 13:54:03 2005
From: shemminger aosdl.org (Stephen Hemminger)
Date: Wed Apr 18 17:37:47 2007
Subject: Re: netem: network hang (2.6.12.3/2.6.13-rc4)
In-Reply-To: <42F2776A.8030208@xxxxxx>
References: <42F2776A.8030208@xxxxxx>
Message-ID: <20050804135403.377c73a0@xxxxxxxxxxxxxxxxx>

OThu, 04 Aug 2005 22:15:38 +0200
Markus Rehbach <markus.rehbach@xxxxxx> wrote:

> Hi all,
> 
> used thvery basic command 'tc qdisc add dev eth0
> roonetedelay 10ms' on my box and 'ssh'ed to another
> box. For a littltraffic I did a 'find /', after a
> shortimthe output stopped. I had to bring the
> interfacdown and up again to solvthe problem
> (for a shortimonly;-).
> 
> I saw this behaviour ithkernels 2.6.12.3 and 2.6.13-rc4
> o2 differenPCs. 2.6.8(.1) was ok and worked as
> espected oboth machines.
> 
> If morinformation is needed to track idown please give
> ma hinwhat is necessary.
> 
> Cheers
> 
> Markus
> 

Whais thclock source for PSCHED?
  Ie. CONFIG_NET_SCH_CLK_CPU? CONFIG_NET_SCH_CLK_JIFFIES?
Whavaluof HZ?

Froathina aarastra.com  Mon Aug  8 15:59:27 2005
From: athina aarastra.co(Athina Markopoulou)
Date: Wed Apr 18 17:37:47 2007
Subject: bursty loss inetem
Message-ID: <60069.209.3.10.88.1123541967.squirrel@mail>

Hi all,

I atrying to generatbursty loss using the command:
"tc qdisc add dev eth1 roonetedrop <percent> <correlation>"

Two things seeto go wrong:
1- thloss does noshow significant correlation correlated, even
wheI chooscorrelation 100%
2- whecorrelation > 0, thmeasured loss rate is smaller
thath<percent> I specify in the command. The larger the
correlation, thsmaller thloss rate I measure.

Is thera known bug or aI missing something (e.g. is
thcorrelation in [0%,100%] or in [0,1]) ?

Thank you,
Athina

Froji.li3 ahp.com  Tue Aug  9 20:33:56 2005
From: ji.li3 ahp.co(Li, Ji)
Date: Wed Apr 18 17:37:47 2007
Subject: netem's packeloss on bridg?
Message-ID: <628BFCE8B64706469FE4D4852CEC953706DFB396@xxxxxxxxxxxxxxxxxxxxxxxxxxxxx>

Hi all,
 
I atrying to simulatpacket loss using NetEm on a bridge machine
(FC4). WheI ping froone end to the other through the bridge, it
seems thaNetEis working: some ping packets are lost. But if I ran
TCP or UDP (ttcp or iperf), iseems thaTCP and UDP doesn't experience
packeloss (For example, I observed no packeloss for a UPD
transmissioeven when thloss rate is set to be 50%). Any idea what's
happening? Is ibecausof the bridge? (When I use NetEm for delay,
everything is fine.)
 
Thanks,
-Ji
-------------- nexpar--------------
AHTML attachmenwas scrubbed...
URL: http://lists.linux-foundation.org/pipermail/netem/attachments/20050809/3d0230c5/attachment-0001.htm
Frojuliokriger agmail.com  Fri Aug 12 06:52:31 2005
From: juliokriger agmail.co(Julio Kriger)
Date: Wed Apr 18 17:37:47 2007
Subject: packagdelay, packagreordering, bandwith and mtu
Message-ID: <682bc30a0508120652d48395d@xxxxxxxxxxxxxx>

Hi!
  I think thaneteneed some modifications to be a better emulator.
  For onthing, I believthat we need to separate package delay from
packagreordering. To achievthis we should be able to specify a
delay, jitter and correlatioJUST for packagdelay. This is actually
done.
  Thewshould be able to specify (X % or Y-th), delay, jitter and
correlatioJUST for packagreordering. I mean that (X % of the
packages or thY-th package) ardelayed, to create the reordering,
buhow much timthe package is delayed? is specified with delay,
jitter and correlation.
  And alaswe should be able to control the bandwith and mtu. I
know thaall of this can bdone with NETEM and TBF, but I believe
thahaving all this functionality jusin one kernel module would be
much better.
  Whado you think? I would likto hear some coments!
  Regards,
    Julio



-- 
----------------------------
Julio Kriger
mailto:juliokriger@xxxxxxxxx


Frojuliokriger agmail.com  Fri Aug 12 11:58:02 2005
From: juliokriger agmail.co(Julio Kriger)
Date: Wed Apr 18 17:37:47 2007
Subject: packagdelay, packagreordering, bandwith and mtu
In-Reply-To: <20050812180505.GA14783@xxxxxxxxxxxxxxxxx>
References: <682bc30a0508120652d48395d@xxxxxxxxxxxxxx>
	<20050812180505.GA14783@xxxxxxxxxxxxxxxxx>
Message-ID: <682bc30a050812115849d1649c@xxxxxxxxxxxxxx>

Mmmmm... yes, maybthais not a great idea. Maybe a better idea
would bif iallow you to specify which scheme use for dropping
packages. Buthamean a LOT of work to do.
Maybe, unconsciously, I was thinking oa "integrated" emulator.
Actually you causseveral king of qdisc disciplines. That's ok, but
it's a annoying becausyou havto compile and install every
disciplinyou wanto use. If all the disciplines came on a single
kernel moduliwould not be as annoying as is today. And if in one
singlcommand linyou could define the charasteristics of the
network you wanto emulate, thawould be great! After all, all those
disciplines armeanto help people to make testing
programs/protocols/etc o"real" (emulated) networks. Givthem a
single, usefull, easy-to-ustool.
Regards,
Julio


O8/12/05, Joshua Blanton <jblanton@xxxxxxxxxxxxxxxxxxx> wrote:
> Julio Kriger wrote:
> >   And alaswe should be able to control the bandwith and mtu. I
> > know thaall of this can bdone with NETEM and TBF, but I believe
> > thahaving all this functionality jusin one kernel module would be
> > much better.
> >   Whado you think? I would likto hear some coments!
> 
> I think thathis is noa particularly great idea.  Limiting
> bandwidth is a rather open-ended concept, and thrangof queueing
> options already ithlinux kernel are *much* more suited for
> tailoring to a specific emulatioschemthan hacking it into netem.
> For instance, how does ondecidwhat sort of scheme will be used for
> dropping packets?  You caqueudifferent types of packets so that
> thlimiter prefers to drop an arbitrary packetype, using the
> existing queueing schemes - would you re-writall of thacode and
> stuff iinto netem, or would you jushave a crippled "bandwidth
> limiting" schemthaimplements a straight TBF queue?  How is this
> any improvemenover thexisting scheme?
> 
> Of course, it's nolikit matters, because even if netem supported
> ratlimiting oncould still use the native queueing functions...  I
> don'sehow adding weight to netem would make life better, though.
> 
> Jusmy thoughts on thmatter,
> --jtb
> 
> --
> Thosother peoplin the unfree world who pretend to view us with
> moral disdaimighdo well to remember that we have achieved this
> level of luxury by way of political liberty.  Thfreworld may be
> gross, vulgar and immoral, buthais not something that the slave
> society cafix.
>                                 Col. Jeff Cooper
> 
> 
> 


-- 
----------------------------
Julio Kriger
mailto:juliokriger@xxxxxxxxx
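For reference, the NETEM-plus-TBF combination this thread discusses can already be expressed by nesting qdiscs, since netem exposes a single class (1:1) that a child qdisc can attach to. A sketch of the commonly documented pattern; the device name and rate values here are illustrative, not taken from the thread:

```shell
# Root qdisc: netem adds 100ms of delay to everything leaving eth0
tc qdisc add dev eth0 root handle 1: netem delay 100ms

# Child qdisc: a token bucket filter attached under netem's class 1:1
# caps the rate at 256kbit
tc qdisc add dev eth0 parent 1:1 handle 10: tbf rate 256kbit buffer 1600 limit 3000
```

This keeps rate limiting in the existing TBF code rather than duplicating it inside netem, which is essentially Joshua's point.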


From JPolacheck at texasmutual.com  Fri Aug 12 14:05:50 2005
From: JPolacheck at texasmutual.com (Jonathan S. Polacheck)
Date: Wed Apr 18 17:37:47 2007
Subject: Jonathan S. Polacheck/AUSTIN/THE_FUND is out of the office.
Message-ID: <OF9E94B31C.1444156E-ON8625705B.0073E400-8625705B.0073E401@xxxxxxxxxxxxxxx>

I will be out of the office starting 08/12/2005 and will not return until
08/23/2005.

I will respond to your message when I return.


From tkoponen at iki.fi  Tue Aug 16 16:28:39 2005
From: tkoponen at iki.fi (Teemu Koponen)
Date: Wed Apr 18 17:37:47 2007
Subject: Delay variation and gigabit speeds
Message-ID: <770e0d5bd0345e910330a1baea03a058@xxxxxx>

Hi,

While emulating Gigabit WAN delay with the netem qdisc, I experienced an 
odd behavior: the delay decreases below the set value. I have set up two 
hosts as follows:

# tc qdisc add dev eth1 root netem delay 20ms limit 30000

Then I run iperf between the hosts with a single TCP stream. The speed 
stabilizes to about 700Mbit/s, but a ping running in parallel reports 
RTTs varying between 32 ms and 45 ms. After iperf finishes, the RTT 
slowly increases back to 40ms.

Is some queue overflowing, or what could cause the too-low delay? I 
increased the txqueuelens of the interfaces to 30000 packets and that 
made TCP stable. The hosts are rather powerful 3+GHz Xeon boxes.

TIA,
Teemu

--
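A back-of-the-envelope check makes the large limit in the message above plausible (a sketch; the throughput and delay figures are taken from the report, and MTU-sized 1500-byte packets are an assumption):

```python
rate_bps = 700e6      # ~700 Mbit/s observed iperf throughput
delay_s = 0.020       # 20 ms configured netem delay
pkt_bits = 1500 * 8   # assumed MTU-sized packets

# Packets resident in the netem delay queue at any instant
# (bandwidth-delay product expressed in packets):
in_flight = rate_bps * delay_s / pkt_bits
print(round(in_flight))  # roughly 1167 packets
```

So netem's default limit of 1000 packets would overflow at this speed, while the limit of 30000 used here leaves ample headroom.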


From shemminger at osdl.org  Wed Aug 17 09:49:20 2005
From: shemminger at osdl.org (Stephen Hemminger)
Date: Wed Apr 18 17:37:47 2007
Subject: Delay variation and gigabit speeds
In-Reply-To: <770e0d5bd0345e910330a1baea03a058@xxxxxx>
References: <770e0d5bd0345e910330a1baea03a058@xxxxxx>
Message-ID: <20050817094920.1f62f12f@xxxxxxxxxxxxxxxxx>

On Tue, 16 Aug 2005 16:28:39 -0700
Teemu Koponen <tkoponen@xxxxxx> wrote:

> Hi,
> 
> While emulating Gigabit WAN delay with the netem qdisc, I experienced an 
> odd behavior: the delay decreases below the set value. I have set up two 
> hosts as follows:
> 
> # tc qdisc add dev eth1 root netem delay 20ms limit 30000
> 
> Then I run iperf between the hosts with a single TCP stream. The speed 
> stabilizes to about 700Mbit/s, but a ping running in parallel reports 
> RTTs varying between 32 ms and 45 ms. After iperf finishes, the RTT 
> slowly increases back to 40ms.
> 
> Is some queue overflowing, or what could cause the too-low delay? I 
> increased the txqueuelens of the interfaces to 30000 packets and that 
> made TCP stable. The hosts are rather powerful 3+GHz Xeon boxes.
> 
> TIA,
> Teemu

You are probably seeing a side effect of the self-clocking. The delay
code sends all packets that are ready, but if a packet is not ready
a kernel timer is set.  The timing is better if HZ=1000 (which it was
up until 2.6.13) and you use the CPU clock as the packet scheduler
clock source (i.e. CONFIG_NET_SCH_CLK_CPU=y).

Long term, I intend to look into high-res timers or other patches
at getting better timing accuracy.
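A toy model of the timer-granularity half of this explanation (my own sketch, not kernel code): if a timer-driven qdisc can only wake on tick boundaries, each packet's release time is rounded up to the next tick, so the worst-case added delay is one tick, about 1 ms at HZ=1000 but about 10 ms at HZ=100.

```python
import math

def release_time_ms(arrival_ms, delay_ms, tick_ms):
    """A delayed packet becomes eligible at arrival + delay, but a
    timer-driven qdisc only wakes on tick boundaries, so the actual
    release is rounded up to the next tick."""
    return math.ceil((arrival_ms + delay_ms) / tick_ms) * tick_ms

def worst_case_error_ms(delay_ms, tick_ms, arrivals):
    """Largest deviation of actual release from the target delay."""
    return max(release_time_ms(a, delay_ms, tick_ms) - (a + delay_ms)
               for a in arrivals)

arrivals = [i * 0.37 for i in range(1000)]  # arbitrary sub-tick arrival times
err_hz100 = worst_case_error_ms(20.0, 10.0, arrivals)   # HZ=100  -> 10 ms ticks
err_hz1000 = worst_case_error_ms(20.0, 1.0, arrivals)   # HZ=1000 -> 1 ms ticks
```

This only models the late side; the below-target delays in the report come from the self-clocking dequeue path Stephen mentions, which the sketch does not attempt to capture.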
	

From tkoponen at iki.fi  Wed Aug 17 15:40:59 2005
From: tkoponen at iki.fi (Teemu Koponen)
Date: Wed Apr 18 17:37:47 2007
Subject: Delay variation and gigabit speeds
In-Reply-To: <20050817094920.1f62f12f@xxxxxxxxxxxxxxxxx>
References: <770e0d5bd0345e910330a1baea03a058@xxxxxx>
	<20050817094920.1f62f12f@xxxxxxxxxxxxxxxxx>
Message-ID: <3a2a36d03f715c0e3ac077586226d407@xxxxxx>

On Aug 17, 2005, at 9:49, Stephen Hemminger wrote:

Stephen,

> You are probably seeing a side effect of the self-clocking. The delay
> code sends all packets that are ready, but if a packet is not ready
> a kernel timer is set.  The timing is better if HZ=1000 (which it was
> up until 2.6.13) and you use the CPU clock as the packet scheduler
> clock source (i.e. CONFIG_NET_SCH_CLK_CPU=y).

I use Xen and it gives no TSC for domains. Unfortunately. So, that's it 
for now, I suppose...

Teemu

--


From shemminger at osdl.org  Wed Aug 17 15:48:16 2005
From: shemminger at osdl.org (Stephen Hemminger)
Date: Wed Apr 18 17:37:47 2007
Subject: Delay variation and gigabit speeds
In-Reply-To: <3a2a36d03f715c0e3ac077586226d407@xxxxxx>
References: <770e0d5bd0345e910330a1baea03a058@xxxxxx>
	<20050817094920.1f62f12f@xxxxxxxxxxxxxxxxx>
	<3a2a36d03f715c0e3ac077586226d407@xxxxxx>
Message-ID: <20050817154816.56fd2dd4@xxxxxxxxxxxxxxxxx>

On Wed, 17 Aug 2005 15:40:59 -0700
Teemu Koponen <tkoponen@xxxxxx> wrote:

> On Aug 17, 2005, at 9:49, Stephen Hemminger wrote:
> 
> Stephen,
> 
> > You are probably seeing a side effect of the self-clocking. The delay
> > code sends all packets that are ready, but if a packet is not ready
> > a kernel timer is set.  The timing is better if HZ=1000 (which it was
> > up until 2.6.13) and you use the CPU clock as the packet scheduler
> > clock source (i.e. CONFIG_NET_SCH_CLK_CPU=y).
> 
> I use Xen and it gives no TSC for domains. Unfortunately. So, that's it 
> for now, I suppose...
> 
> Teemu

netem is pretty realtime-based and it probably won't work real well
in virtualized environments

From tkoponen at iki.fi  Wed Aug 17 17:01:47 2005
From: tkoponen at iki.fi (Teemu Koponen)
Date: Wed Apr 18 17:37:47 2007
Subject: Delay variation and gigabit speeds
In-Reply-To: <20050817154816.56fd2dd4@xxxxxxxxxxxxxxxxx>
References: <770e0d5bd0345e910330a1baea03a058@xxxxxx>
	<20050817094920.1f62f12f@xxxxxxxxxxxxxxxxx>
	<3a2a36d03f715c0e3ac077586226d407@xxxxxx>
	<20050817154816.56fd2dd4@xxxxxxxxxxxxxxxxx>
Message-ID: <ea9677e4ce5ab34b2523505e5d87b0aa@xxxxxx>

On Aug 17, 2005, at 15:48, Stephen Hemminger wrote:

Stephen,

>>> You are probably seeing a side effect of the self-clocking. The delay
>>> code sends all packets that are ready, but if a packet is not ready
>>> a kernel timer is set.  The timing is better if HZ=1000 (which it was
>>> up until 2.6.13) and you use the CPU clock as the packet scheduler
>>> clock source (i.e. CONFIG_NET_SCH_CLK_CPU=y).
>>
>> I use Xen and it gives no TSC for domains. Unfortunately. So, that's 
>> it
>> for now, I suppose...
>
> netem is pretty realtime-based and it probably won't work real well
> in virtualized environments

Assuming the virtual machine monitor can guarantee precise-enough 
timing and enough CPU time, I see no major obstacles. But agreed, the 
accuracy netem provides is likely to degrade.

If the VM is well-loaded, the average delay actually remains rather 
stable, as the following ping snapshot suggests:

64 bytes from 10.0.0.4: icmp_seq=757 ttl=64 time=31.8 ms
64 bytes from 10.0.0.4: icmp_seq=758 ttl=64 time=42.5 ms
64 bytes from 10.0.0.4: icmp_seq=759 ttl=64 time=31.2 ms
64 bytes from 10.0.0.4: icmp_seq=760 ttl=64 time=42.3 ms
64 bytes from 10.0.0.4: icmp_seq=761 ttl=64 time=31.4 ms
...

The mean deviation stabilizes to 5 ms, which suggests Xen timing has a 
granularity of 10 ms. The rest is noise due to the virtualization, 
probably.
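The 5 ms figure checks out against the snapshot directly (a quick sketch; note that iputils ping actually reports mdev as a standard deviation, but both measures land near 5 ms on these samples):

```python
# RTT samples from the ping snapshot above (icmp_seq 757-761)
rtts = [31.8, 42.5, 31.2, 42.3, 31.4]

mean = sum(rtts) / len(rtts)                        # ~35.84 ms
mad = sum(abs(r - mean) for r in rtts) / len(rtts)  # mean absolute deviation
```

The mean absolute deviation comes out to about 5.25 ms (the standard deviation to about 5.36 ms), consistent with RTTs alternating on either side of a roughly 10 ms scheduling granularity.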

Teemu

--


