Re: General question about queuing on hardware devices

Henrik Nordstrom wrote:

On Thu, 13 May 2004, Pantelis Antoniou wrote:


The thing is that I'm not interested in just absorbing the latency
without holes in the transmit path.


Please elaborate on what the difference is.



I have in mind something like fast-route. The main bulk of my
traffic does not pass through the kernel. However, I'd like the
local traffic to have priority over the switched traffic.

I'm targeting a bridging application (a soft switch), and I'm trying to
minimize the latency of packets entering one port and leaving another.


How does keeping your hardware buffers anything more than never-empty benefit your hardware? Or, put differently, how does your hardware device benefit from the transmit buffers always being completely filled, compared to being reasonably filled before they drain?

Is it the case that your hardware device processes the transmit buffers non-sequentially, effectively giving you more than one transmit queue to keep filled, each of which is too small to keep your hardware busy across the latency of the softirq?


It's not a question of keeping my buffers full.

Take a look at the diagram...
           +-------+          +-------+
<--------> | PORT1 |          | PORT2 | <-------->
           +-------+          +-------+
           |          [LOCAL]         |
           +--------------------------+

I'd like to act as a switch for traffic between the ports
and have the local traffic always have a higher transmit priority.

I haven't implemented the switching driver yet, so I can't be sure,
but the problem is that the CPU is not very powerful.

But I think that under continuous traffic from the ports it is not
possible to queue local traffic in front of the port traffic.

I know it is not a very general case, so I'm asking if there's an API
that applies in my case.


The API only covers how your driver gets packets from the kernel; how you buffer, reorder, etc. within your driver is up to you.
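
[For illustration, a minimal sketch of one way such driver-internal reordering could look. Everything here is hypothetical, not a real API: the my_* names, MY_LOCAL_Q_MAX, and the two-queue layout are placeholders, with my_hw_tx_room() and my_hw_queue_skb() standing in for whatever the real hardware offers. Local traffic arrives via hard_start_xmit; switched traffic would be fed into switch_q by the driver's own fast path.]

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>
    #include <linux/spinlock.h>

    /* Hypothetical hardware hooks -- stand-ins for the real device: */
    extern int  my_hw_tx_room(struct net_device *dev);
    extern void my_hw_queue_skb(struct net_device *dev, struct sk_buff *skb);

    #define MY_LOCAL_Q_MAX 16    /* arbitrary soft-queue depth */

    struct my_priv {
        spinlock_t lock;                /* mediates both fill paths */
        struct sk_buff_head local_q;    /* packets from the netif core */
        struct sk_buff_head switch_q;   /* packets switched port-to-port */
        /* both queues set up with skb_queue_head_init() in open */
    };

    /* Called with priv->lock held: top up the hardware ring, always
     * draining the local queue first so local traffic gets strict
     * transmit priority over switched traffic. */
    static void my_refill_hw(struct net_device *dev)
    {
        struct my_priv *priv = netdev_priv(dev);
        struct sk_buff *skb;

        while (my_hw_tx_room(dev)) {
            skb = __skb_dequeue(&priv->local_q);
            if (!skb)
                skb = __skb_dequeue(&priv->switch_q);
            if (!skb)
                break;
            my_hw_queue_skb(dev, skb);
        }
    }

    static int my_hard_start_xmit(struct sk_buff *skb, struct net_device *dev)
    {
        struct my_priv *priv = netdev_priv(dev);
        unsigned long flags;

        spin_lock_irqsave(&priv->lock, flags);
        __skb_queue_tail(&priv->local_q, skb);
        my_refill_hw(dev);
        if (skb_queue_len(&priv->local_q) >= MY_LOCAL_Q_MAX)
            netif_stop_queue(dev);      /* throttle the netif core */
        spin_unlock_irqrestore(&priv->lock, flags);
        return 0;                       /* packet accepted */
    }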

If your hardware buffers are not full and the queue is running, then there
is no latency involved, as the transmit function is called immediately.

If your hardware buffers are, or were very recently, full, then there is a
latency before the transmit queue restarts. Reducing this by introducing
yet another small buffer layer between the netif transmit handler and your
hardware interrupt handler is not hard, but it may be hard to find a good
balance between overhead and buffer refill latency.


Agreed.

Otherwise I'm forced to duplicate or modify the netif scheduling core so
that I can call it from within an interrupt context.


I don't see why this would be needed. Why can't you just add an additional layer of buffering in your driver if the hardware buffers are not sufficient for what your hardware is doing?

You cannot transmit a packet before it is available to the netif core,
and you can easily make a driver which accepts and queues packets for its
interrupt handler when no hardware buffers are immediately available. But
as you then have two paths for filling your hardware buffers, you had better
be careful to mediate access properly, and also remember not to spend too
much time in the hardware IRQ, as this will have adverse effects on all
other components in your system, possibly including, but not limited to,
your own ability to receive packets.
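
[To make the two-paths point concrete with the same hypothetical sketch as above: the in-driver fast path that switches a packet from one port straight out the other must take the same lock as the netif path, and should keep its backlog bounded so the receive IRQ never stalls on transmit. my_switch_xmit() and MY_SWITCH_Q_MAX are, as before, invented placeholders.]

    #define MY_SWITCH_Q_MAX 32   /* arbitrary bound on the switched backlog */

    /* Hypothetical fast path: called from the receiving port's IRQ with a
     * packet destined for out_dev, bypassing the netif core entirely but
     * still serialized against my_hard_start_xmit() by priv->lock. */
    static void my_switch_xmit(struct net_device *out_dev, struct sk_buff *skb)
    {
        struct my_priv *priv = netdev_priv(out_dev);
        unsigned long flags;

        spin_lock_irqsave(&priv->lock, flags);
        __skb_queue_tail(&priv->switch_q, skb);
        if (skb_queue_len(&priv->switch_q) > MY_SWITCH_Q_MAX) {
            /* drop the oldest switched packet rather than let the
             * backlog (and time spent in the IRQ) grow without bound */
            skb = __skb_dequeue(&priv->switch_q);
            dev_kfree_skb_irq(skb);
        }
        my_refill_hw(out_dev);
        spin_unlock_irqrestore(&priv->lock, flags);
    }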

Regards
Henrik




Regards

Pantelis

