Re: [RFC net-next 0/5] Suspend IRQs during preferred busy poll

On 2024-08-19 22:07, Jakub Kicinski wrote:
> On Tue, 13 Aug 2024 21:14:40 -0400 Martin Karsten wrote:
>>> What about NIC interrupt coalescing? defer_hard_irqs_count was supposed
>>> to be used with NICs which either don't have IRQ coalescing or have a
>>> broken implementation. The timeout of 200usec should be perfectly within
>>> range of what NICs can support.
>>>
>>> If the NIC IRQ coalescing works, instead of adding a new timeout value
>>> we could add a new deferral control (replacing defer_hard_irqs_count)
>>> which would always kick in after seeing prefer_busy_poll() but also
>>> not kick in if the busy poll harvested 0 packets.
>> Maybe I am missing something, but I believe this would have the same
>> problem that we describe for gro-timeout + defer-irq. When busy poll
>> does not harvest packets and the application thread is idle and goes to
>> sleep, it would then take up to 200 us to get the next interrupt. This
>> considerably increases tail latencies under low load.
>>
>> In order to get low latencies under low load, the NIC timeout would have
>> to be something like 20 us, but under high load the application thread
>> will be busy for longer than 20 us and the interrupt (and softirq) will
>> come too early and cause interference.
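To make that tradeoff concrete, here is some illustrative arithmetic (a toy userspace model with made-up helper names, not kernel code):

```c
#include <assert.h>

/* Toy model (not kernel code): with a single fixed NIC coalescing
 * timeout, the two goals pull in opposite directions. */

/* A packet arriving while the application sleeps waits up to the
 * full coalescing timeout before the interrupt fires. */
static int idle_tail_latency_us(int nic_timeout_us)
{
	return nic_timeout_us;
}

/* The interrupt (and softirq) interferes whenever the application's
 * busy period outlasts the coalescing timeout. */
static int irq_interferes(int nic_timeout_us, int busy_period_us)
{
	return busy_period_us > nic_timeout_us;
}
```

A 200 us timeout avoids interference under load but dominates tail latency when idle; a 20 us timeout inverts the problem.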

> An FSM-like diagram would go a long way in clarifying things :)

I agree the suspend mechanism is not trivial and the implementation is subtle. It has frequently made our heads hurt while developing this. We will take a long hard look at our cover letter and produce other documentation to hopefully provide clear explanations.
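As a first step in that direction, here is a rough sketch of the transitions as we would draw them (a simplification; the state and event names are ours, not identifiers from the patches):

```c
#include <assert.h>

/* Rough FSM sketch of irq suspension (simplified; names are ours,
 * not from the patches). */

enum napi_ctl {
	IRQS_ENABLED,	/* idle: HW interrupts armed */
	SUSPENDED,	/* prefer-busy active: IRQs off, long timeout armed */
	SHORT_DEFER,	/* app went idle: short software timeout armed */
};

enum napi_event {
	BUSY_POLL_PKTS,	/* busy poll harvested packets */
	BUSY_POLL_IDLE,	/* busy poll harvested 0 / app about to sleep */
	TIMEOUT,	/* whichever timeout was armed has expired */
};

static enum napi_ctl step(enum napi_ctl s, enum napi_event e)
{
	switch (e) {
	case BUSY_POLL_PKTS:
		return SUSPENDED;	/* (re)arm the long timeout */
	case BUSY_POLL_IDLE:
		return SHORT_DEFER;	/* arm the short timeout */
	case TIMEOUT:
		return IRQS_ENABLED;	/* re-enable HW interrupts */
	}
	return s;
}
```

The long timeout acts as a safety net: if the application stops polling while in SUSPENDED, its expiry still hands control back to interrupts.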

>> It is tempting to think of the second timeout as 0 and in fact re-enable
>> interrupts right away. We have tried it, but it leads to a lot of
>> interrupts and corresponding inefficiencies, since a system below
>> capacity frequently switches between busy and idle. Using a small
>> timeout (20 us) for modest deferral and batching when idle is a lot more
>> efficient.

> I see. I think we are on the same page. What I was suggesting is to use
> the HW timer instead of the short timer. But I suspect the NIC you're
> using isn't really good at clearing IRQs before unmasking. Meaning that
> when you try to reactivate HW control there's already an IRQ pending
> and it fires pointlessly. That matches my experience with mlx5.
> If the NIC driver was to clear the IRQ state before running the NAPI
> loop, we would have no pending IRQ by the time we unmask and activate
> HW IRQs.
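For the record, a toy model of the ordering described above (an illustrative userspace sketch, not actual driver code): if the latched cause bit is acked before the poll, the poll harvests the work the bit pointed at, and unmasking afterwards fires nothing.

```c
#include <assert.h>

/* Toy model of IRQ ack ordering (not actual driver code). */
struct toy_nic {
	int irq_cause;	/* latched cause bit from an earlier event */
	int rx_queued;	/* packets waiting in the ring */
	int irqs_fired;	/* interrupts fired at unmask time */
};

static void poll_rx(struct toy_nic *n)
{
	n->rx_queued = 0;	/* the NAPI loop harvests everything queued */
}

static void unmask(struct toy_nic *n)
{
	if (n->irq_cause) {	/* a still-latched cause fires immediately */
		n->irqs_fired++;
		n->irq_cause = 0;
	}
}

/* Returns how many pointless interrupts fire at unmask time. */
static int spurious_irqs(int clear_before_poll)
{
	struct toy_nic n = { .irq_cause = 1, .rx_queued = 1 };

	if (clear_before_poll)
		n.irq_cause = 0;	/* ack first ... */
	poll_rx(&n);			/* ... the poll picks up the work anyway */
	unmask(&n);
	return n.irqs_fired;
}
```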

I believe there are additional issues. The problem is that the long timeout must engage if and only if prefer-busy is active.

When using NIC coalescing for the short timeout (without gro/defer), an interrupt after an idle period triggers a softirq, which runs napi polling. At this point, prefer-busy is not active, so NIC interrupts would be re-enabled, and the longer timeout has no opportunity to interject and switch control back to polling. In other words, only by using the software timer for the short timeout is it possible to extend the deferral without having to reprogram the NIC timer or reach down and directly disable interrupts.
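In sketch form (our shorthand; the constants and the prefer-busy flag are placeholders, not the actual interface), extending the deferral is just arming the same software timer with a different expiry, with no NIC coalescing register to reprogram and no IRQ to disable by hand:

```c
#include <assert.h>

/* Placeholder values, chosen for illustration only. */
#define SHORT_TIMEOUT_US	20	/* idle batching, gro_flush_timeout-style */
#define LONG_TIMEOUT_US		20000	/* hypothetical irq-suspend timeout */

static unsigned int next_expiry_us(int prefer_busy_active)
{
	/* the long timeout engages if and only if prefer-busy is active */
	return prefer_busy_active ? LONG_TIMEOUT_US : SHORT_TIMEOUT_US;
}
```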

Using gro_flush_timeout for the long timeout also has problems, for the same underlying reason. In the current napi implementation, gro_flush_timeout is not tied to prefer-busy. We'd either have to change that, modifying the existing deferral mechanism in the process, or introduce a state variable to track whether gro_flush_timeout is being used as the long timeout for irq suspend or for its default purpose. In an earlier version, we tried something similar to the latter and made it work, but it ended up a lot more convoluted than our current proposal.

Thanks,
Martin
