Re: Request for comments on dejittering app

Steven Rostedt wrote:

John Sigler wrote:

Any comments so far?

Well, some NICs buffer writes, and some even buffer receives. So your NIC
may be buffering underneath you too.

Now that you mention it, I did notice that /proc/interrupts reports different numbers depending on whether I send 1 packet every X ms or N packets every N*X ms.

By the way, what does /proc/interrupts report? The number of IRQs that were actually serviced by the OS?

Now all this seems nice in theory, but I'd also like to make it work in
practice! (And I'm having some problems.)

R is typically 38 Mbit/s => B/R is typically 277 µs
Obviously, I'm going to need high-resolution timers if I want to sleep
for so small an interval.
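
Here is roughly what I have in mind for the send loop: an absolute-time
clock_nanosleep() on CLOCK_MONOTONIC, so a late wake-up does not shift all
subsequent wake-ups. The period value and send_one_packet() are placeholders,
of course (link with -lrt):

#define _POSIX_C_SOURCE 200112L
#include <time.h>

#define PERIOD_NS 277000L        /* B/R = 277 us at R = 38 Mbit/s */

void send_one_packet(void);      /* placeholder for the actual write */

static void timespec_add_ns(struct timespec *t, long ns)
{
    t->tv_nsec += ns;
    while (t->tv_nsec >= 1000000000L) {
        t->tv_nsec -= 1000000000L;
        t->tv_sec++;
    }
}

void sender_loop(void)
{
    struct timespec next;

    clock_gettime(CLOCK_MONOTONIC, &next);
    for (;;) {
        send_one_packet();
        timespec_add_ns(&next, PERIOD_NS);
        /* absolute sleep: late wake-ups do not accumulate into drift */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
}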

As far as I understand, each packet received will generate one IRQ, and
each write to the PCI output device will generate one IRQ. This means
3600 IRQs per second from the NIC, and 3600 IRQs per second from the PCI
device. Is that reasonable? (Considering a 1267 MHz P3 with no IO-APIC.)
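
(That figure is just the packet rate: 1 s / 277 µs is roughly 3600 packets
per second, and I'm assuming one IRQ per packet on each side.)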

As mentioned above, the NIC may buffer too, so it may not be sending the
packets out immediately.  Usually, a driver would send out several packets
before a transmit IRQ is raised. In fact, that transmit IRQ is usually there
to tell the driver that it can write more once a certain threshold has
been reached. (It's been several years since I wrote a NIC driver, so I'm
going off of my old memory ;-)

Is /proc/interrupts a good way to tell whether the NIC is grouping transmits before generating an IRQ?
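
What I had in mind is something like this: snapshot the interrupt count for
the NIC's line in /proc/interrupts, send a known number of packets, and look
at the delta. This is only a sketch, and it assumes the NIC's line actually
contains the interface name (e.g. "eth0"):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Sum the per-CPU counters on the /proc/interrupts line that mentions
 * `dev` (e.g. "eth0"). Returns -1 if the line is not found. */
long irq_count(const char *dev)
{
    FILE *f = fopen("/proc/interrupts", "r");
    char line[1024];
    long total = -1;

    if (!f)
        return -1;
    while (fgets(line, sizeof line, f)) {
        char *p;
        if (!strstr(line, dev))
            continue;
        p = strchr(line, ':');
        if (!p)
            break;
        p++;
        total = 0;
        for (;;) {
            char *end;
            long v = strtol(p, &end, 10);
            if (end == p)    /* reached the controller/handler name */
                break;
            total += v;
            p = end;
        }
        break;
    }
    fclose(f);
    return total;
}

If I send, say, 10000 packets between two calls and the counter only grows by
a fraction of that, then transmits are presumably being coalesced, modulo
receive interrupts and unrelated traffic on the same line.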

It is easy to reduce the number of IRQs from the PCI device by grouping
several packets for a single write. It might be worthwhile.

May not be needed if the driver already does it.
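
If it turns out the driver does not already batch, I was thinking of doing it
in the application, along these lines. PKT_SIZE, BATCH and get_next_packet()
are made-up names, and the packet size is just my assumption (1316 * 8 bits at
38 Mbit/s gives the 277 µs above):

#include <unistd.h>

#define PKT_SIZE 1316   /* assumed packet size */
#define BATCH    8      /* packets per write() */

void get_next_packet(unsigned char *dst);   /* placeholder */

/* Fill one buffer with BATCH packets and push it with a single write();
 * if the device raises one IRQ per write, this cuts the IRQ rate by BATCH. */
ssize_t send_batch(int fd)
{
    unsigned char buf[BATCH * PKT_SIZE];
    int i;

    for (i = 0; i < BATCH; i++)
        get_next_packet(buf + i * PKT_SIZE);

    return write(fd, buf, sizeof buf);
}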

One problem I have is that the PCI device's driver blocks until the
device has acknowledged the data, and the write operation sometimes
blocks for 200, 300, even 400 µs (I have not been able to tell why).

Not sure what you mean by saying the driver "blocks". Is it a sleeping
thread (in -rt) or from an interrupt? Can you be more specific here?

(I may have several misconceptions.) In the case of network drivers, when a user-space process calls send(), the driver copies the user data and schedules the actual transmission to happen "sometime later".

But it "feels" like my PCI device's driver didn't copy the user space data, and instead programmed the DMA directly from the user space buffer, which means it has to put the user space process to sleep until the operation is complete, which sometimes takes 100s of µs.

Or perhaps the driver has only a few buffers (I've read something about ping-pong DMA in the data sheet), and because I'm writing only small amounts of data at a time, they sometimes all fill up when some DMA operations take longer than usual.

The driver source code is available here:
http://dektec.com/Products/LinuxSDK/Downloads/LinuxSDK.zip

I will take a hard look at it when I get back to the office.
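
In the meantime, I can at least measure from user space how long each write()
blocks, to see whether the 200-400 µs stalls correlate with anything. A rough
sketch (timed_write() is just a wrapper I would drop into the app):

#define _POSIX_C_SOURCE 200112L
#include <stdio.h>
#include <time.h>
#include <unistd.h>

/* Wrap write() and report how long it blocked, in microseconds. */
ssize_t timed_write(int fd, const void *buf, size_t len)
{
    struct timespec t0, t1;
    ssize_t ret;
    long us;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    ret = write(fd, buf, len);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    us = (t1.tv_sec - t0.tv_sec) * 1000000L
       + (t1.tv_nsec - t0.tv_nsec) / 1000L;
    if (us > 200)   /* arbitrary threshold for "suspiciously long" */
        fprintf(stderr, "write() of %zu bytes blocked for %ld us\n", len, us);
    return ret;
}

Collecting these into a histogram over a long run should show whether the long
stalls are rare outliers or a regular pattern.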

I might not need -rt, if I'm willing to handle several packets every
time I wake up?

For any reaction time less than 100 µs (or maybe even higher), you will
need -rt.

I'm still unsure whether there is a place in my app where I need a quick reaction time. What's your opinion?

Regards.
