Re: [RFC net-next 0/6] Cleanup IRQ affinity checks in several drivers

On 14/08/2024 18:19, Joe Damato wrote:
On Wed, Aug 14, 2024 at 08:09:15AM -0700, Jakub Kicinski wrote:
On Wed, 14 Aug 2024 13:12:08 +0100 Joe Damato wrote:
Actually... how about a slightly different approach, which caches
the affinity mask in the core?

I was gonna say :)

   0. Extend napi struct to have a struct cpumask * field

   1. extend netif_napi_set_irq to:
     a. store the IRQ number in the napi struct (as you suggested)
     b. call irq_get_effective_affinity_mask to store the mask in the
        napi struct
     c. set up generic affinity_notify.notify and
        affinity_notify.release callbacks to update the in core mask
        when it changes

This part I'm not an expert on.

Several net drivers (mlx5, mlx4, ice, ena, and more) use a feature
called ARFS (rmap)[1], and that feature relies on the affinity notifier
mechanism.
Also, the affinity notifier infrastructure supports only a single
notifier per IRQ.

Hence, your suggestion (1.c) will break the ARFS feature.

[1] see irq_cpu_rmap_add()
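
For reference, that collision looks roughly like the sketch below. This
is a minimal illustration based on the existing cpu_rmap API; the
function name example_setup_arfs_rmap() and the map size are made up:

#include <linux/cpu_rmap.h>
#include <linux/errno.h>

/* Sketch: how an ARFS-enabled driver wires up the rmap for one IRQ.
 * irq_cpu_rmap_add() registers its own struct irq_affinity_notify on
 * the IRQ internally, and only one notifier per IRQ is supported, so a
 * generic in-core notifier (1.c above) could not also be registered.
 */
static int example_setup_arfs_rmap(int irq)
{
	struct cpu_rmap *rmap = alloc_irq_cpu_rmap(1);	/* 1 queue, for the example */

	if (!rmap)
		return -ENOMEM;

	return irq_cpu_rmap_add(rmap, irq);
}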


   2. add napi_affinity_no_change which now takes a napi_struct

   3. cleanup all 5 drivers:
     a. add calls to netif_napi_set_irq for all 5 (I think no RTNL
        is needed, so I think this would be straight forward?)
     b. remove all affinity_mask caching code in 4 of 5 drivers
     c. update all 5 drivers to call napi_affinity_no_change in poll

Then ... anyone who adds support for netif_napi_set_irq to their
driver in the future gets automatic support in-core for
caching/updating of the mask? And in the future netdev-genl could
dump the mask since it's in-core?

I'll mess around with that locally to see how it looks, but let me
know if that sounds like a better overall approach.
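
Not the actual patches, just a minimal sketch of what 0-2 above could
look like in the core. The embedded affinity_mask member (rather than
the struct cpumask * from item 0) and the exact shape of
napi_affinity_no_change() are assumptions here:

#include <linux/netdevice.h>
#include <linux/interrupt.h>
#include <linux/cpumask.h>
#include <linux/smp.h>

/* 0/1: cache the IRQ number and its effective affinity mask in the
 * napi struct (napi->affinity_mask is the hypothetical new field). */
void netif_napi_set_irq(struct napi_struct *napi, int irq)
{
	const struct cpumask *mask;

	napi->irq = irq;
	if (irq <= 0)
		return;

	mask = irq_get_effective_affinity_mask(irq);	/* helper named in 1.b */
	if (mask)
		cpumask_copy(&napi->affinity_mask, mask);

	/* 1.c would also register affinity_notify callbacks here to keep
	 * the cached copy current -- the part that collides with ARFS. */
}

/* 2: generic replacement for the per-driver "did my IRQ move?" check. */
bool napi_affinity_no_change(const struct napi_struct *napi)
{
	if (cpumask_empty(&napi->affinity_mask))
		return true;

	return cpumask_test_cpu(smp_processor_id(), &napi->affinity_mask);
}

A driver's poll routine would then call napi_affinity_no_change(napi)
instead of consulting its own cached mask (see the poll sketch further
down).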

I ended up going with the approach laid out above; moving the IRQ
affinity mask updating code into the core (which adds that ability
to gve/mlx4/mlx5... it seems mlx4/5 cached the mask but didn't have
notifiers set up to update the cached copy?)


This is probably due to what I wrote above.


and adding calls to
netif_napi_set_irq in i40e/iavf and deleting their custom notifier
code.

It's almost ready for rfcv2; I think this approach is probably
better?

Could we even handle this directly as part of __napi_poll(),
once the driver gives core all of the relevant pieces of information?

I had been thinking the same thing, too, but it seems like at least
one driver (mlx5) counts the number of affinity changes to export as
a stat, so moving all of this to core would break that.

So, I may avoid attempting that for this series.

I'm still messing around with this but will send an rfcv2 in a bit.
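
As a footnote on the stats point: here is roughly how a driver poll
call site could look after the cleanup, with the affinity-change
counter staying in the driver (all example_* names and the
example_ch_stats struct are made up for illustration, not taken from
any real driver):

#include <linux/netdevice.h>

struct example_ch_stats { u64 aff_change; };

int example_clean_rings(struct napi_struct *napi, int budget);
struct example_ch_stats *example_stats(struct napi_struct *napi);
void example_enable_irq(struct napi_struct *napi);

static int example_napi_poll(struct napi_struct *napi, int budget)
{
	int work_done = example_clean_rings(napi, budget);

	if (work_done == budget) {
		/* Still busy and the IRQ has not moved: keep polling. */
		if (napi_affinity_no_change(napi))
			return budget;

		/* The IRQ moved to another CPU: count it in the driver's
		 * own stat (the reason this can't all live in
		 * __napi_poll()), then complete so NAPI is rescheduled on
		 * the new CPU. */
		example_stats(napi)->aff_change++;
		work_done--;
	}

	if (napi_complete_done(napi, work_done))
		example_enable_irq(napi);

	return work_done;
}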




