possible delays in netif_rx

Hi,
	I have been enhancing our sync card driver to use DMA to copy a
received frame from the card's memory into a kernel buffer.  We previously
used memcpy_fromio.  The card has 8 receive buffers, which are used in
turn.  I have been debugging a problem where sometimes my kernel module
hasn't been able to keep up with the card.

	The line rate is 8 Mbit/s and the packet size is 1500 bytes, so
that's about 1.5 ms per frame; with 8 buffers it would take 12 ms to run
out.
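
(Spelling out the arithmetic: 1500 bytes x 8 = 12,000 bits per frame;
12,000 bits / 8 Mbit/s = 1.5 ms per frame; 8 buffers x 1.5 ms = 12 ms
before the first buffer must have been serviced again.)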

I used an analyser for debugging and soon found that the time from
requesting the DMA to completing the processing of the received frame was
280 us on average, but could sometimes be over 3 ms.  Narrowing this down
a bit more, I found that the call to netif_rx could take a variable amount
of time: 5 us on average, but sometimes as much as 3 ms.

This I think explains my problem, and I can see that I need to separate
the DMA-complete processing from the processing of the received frame,
including passing it up the stack.  I guess I need to process the frame in
a bottom half, along the lines of the sketch below.
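
For the record, here is a minimal sketch of the kind of deferral I have in
mind, using a 2.4-style tasklet.  The sync_card_* names and the private
queue are made up for illustration; this is not our actual driver code:

    #include <linux/interrupt.h>
    #include <linux/skbuff.h>
    #include <linux/netdevice.h>

    /* Frames whose DMA has completed, waiting to be pushed up the stack.
     * Must be initialised with skb_queue_head_init() at probe time. */
    static struct sk_buff_head sync_card_rx_pending;

    /* Bottom half: drain the private queue and hand each frame to netif_rx. */
    static void sync_card_rx_bh(unsigned long data)
    {
            struct sk_buff *skb;

            while ((skb = skb_dequeue(&sync_card_rx_pending)) != NULL)
                    netif_rx(skb);
    }

    static DECLARE_TASKLET(sync_card_rx_tasklet, sync_card_rx_bh, 0);

    /* Called from the interrupt handler when a receive DMA completes:
     * just queue the skb and defer everything else. */
    static void sync_card_dma_complete(struct sk_buff *skb)
    {
            skb_queue_tail(&sync_card_rx_pending, skb);
            tasklet_schedule(&sync_card_rx_tasklet);
    }

That way the DMA-complete path only queues the frame, and the variable-cost
work happens outside the path that has to keep up with the card's buffers.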

However, I'm puzzled by what is happening in netif_rx.  It looks as though
it just queues the frame for later processing.  Can anyone explain what
happens in netif_rx to cause this wide variation in execution time?  The
return from netif_rx is always NET_RX_SUCCESS.  The kernel I'm currently
using is 2.4.17.
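
From my (quite possibly incomplete) reading of net/core/dev.c, netif_rx in
2.4 boils down to something like the following.  This is a heavily
simplified paraphrase, with the congestion/throttling logic, statistics and
timestamping left out; it is not the literal 2.4.17 code:

    int netif_rx(struct sk_buff *skb)
    {
            int cpu = smp_processor_id();
            struct softnet_data *queue = &softnet_data[cpu];
            unsigned long flags;

            local_irq_save(flags);
            if (queue->input_pkt_queue.qlen <= netdev_max_backlog) {
                    /* Queue the frame on this CPU's backlog ... */
                    __skb_queue_tail(&queue->input_pkt_queue, skb);
                    /* ... and mark NET_RX_SOFTIRQ pending; the frame is
                     * actually processed later, when the softirq runs. */
                    __cpu_raise_softirq(cpu, NET_RX_SOFTIRQ);
                    local_irq_restore(flags);
                    return NET_RX_SUCCESS;
            }
            local_irq_restore(flags);
            kfree_skb(skb);
            return NET_RX_DROP;
    }

i.e. it really does appear to be just an enqueue plus a softirq flag, which
is why the 3 ms worst case surprises me.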


Many Thanks

Kevin

-----Original Message-----
From: Carsten Langgaard [mailto:carstenl@mips.com]
Sent: 13 November 2002 13:35
To: Ralf Baechle; linux-mips@linux-mips.org; tsbogend@alpha.franken.de;
linux-net@vger.kernel.org; kevink@mips.com
Subject: BUG in the PCNET32 ethernet driver


I finally found the problem that caused a lot of trouble with an ethernet
throughput test that we have been running.
It turned out the problem is related to a bug in the PCNET32 driver when
you are running it on a system that doesn't support hardware coherency.

The problem is the way the AMD ethernet driver uses the PCI DMA mapping
routines.
When the driver releases a receive DMA buffer to the controller for a
later DMA transfer, it calls the PCI DMA flushing routine as it should,
but it calls it with a length equal to 0.  The driver assumes that the
length field in the buffer structure equals the actual length of the
buffer, but that field is only set when the packet is received (when the
skb_put function is called).
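
To illustrate the pattern (this is only a paraphrase of the problem, not
the attached patch, and the variable names are merely indicative):

    /* When the buffer is handed back to the controller, skb_put() has not
     * been called yet, so skb->len is still 0 and the flush covers nothing
     * on a non-coherent system: */
    pci_dma_sync_single(lp->pci_dev, lp->rx_dma_addr[entry],
                        skb->len, PCI_DMA_FROMDEVICE);   /* length == 0 */

    /* The flush has to cover the whole receive buffer instead, e.g.: */
    pci_dma_sync_single(lp->pci_dev, lp->rx_dma_addr[entry],
                        PKT_BUF_SZ, PCI_DMA_FROMDEVICE);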

I have attached a patch that solves this problem.
Please note that the patch is against Ralf Baechle's latest linux_2_4
tree.

/Carsten



--
_    _ ____  ___   Carsten Langgaard   Mailto:carstenl@mips.com
|\  /|||___)(___   MIPS Denmark        Direct: +45 4486 5527
| \/ |||    ____)  Lautrupvang 4B      Switch: +45 4486 5555
  TECHNOLOGIES     2750 Ballerup       Fax...: +45 4486 5556
                   Denmark             http://www.mips.com


