Re: [RFC net-next 0/6] Cleanup IRQ affinity checks in several drivers

On Wed, Aug 14, 2024 at 08:14:48AM +0100, Joe Damato wrote:
> On Tue, Aug 13, 2024 at 05:17:10PM -0700, Jakub Kicinski wrote:
> > On Mon, 12 Aug 2024 14:56:21 +0000 Joe Damato wrote:
> > > Several drivers make a check in their napi poll functions to determine
> > > if the CPU affinity of the IRQ has changed. If it has, the napi poll
> > > function returns a value less than the budget to force polling mode to
> > > be disabled, so that it can be rescheduled on the correct CPU next time
> > > the softirq is raised.
> > 
> > Any reason not to use the irq number already stored in napi_struct ?
> 
> Thanks for taking a look.
> 
> IIUC, that's possible if i40e, iavf, and gve are updated to call
> netif_napi_set_irq first, which I could certainly do.
> 
> But as Stanislav points out, I would be adding a call to
> irq_get_effective_affinity_mask in the hot path where one did not
> exist before for 4 of 5 drivers.
> 
> In that case, it might make more sense to introduce:
> 
>   bool napi_affinity_no_change(const struct cpumask *aff_mask)
> 
> instead and the drivers which have a cached mask can pass it in and
> gve can be updated later to cache it.
> 
> Not sure how crucial avoiding the irq_get_effective_affinity_mask
> call is; I would guess maybe some driver owners would object to
> adding a new call in the hot path where one didn't exist before.
> 
> What do you think?

Actually... how about a slightly different approach, which caches
the affinity mask in the core?

  0. Extend napi struct to have a struct cpumask * field

  1. extend netif_napi_set_irq to:
    a. store the IRQ number in the napi struct (as you suggested)
    b. call irq_get_effective_affinity_mask to store the mask in the
       napi struct
    c. set up generic affinity_notify.notify and
       affinity_notify.release callbacks to update the in-core mask
       when it changes

  2. add napi_affinity_no_change which now takes a napi_struct

  3. clean up all 5 drivers:
    a. add calls to netif_napi_set_irq for all 5 (I think no RTNL
       is needed, so this should be straightforward?)
    b. remove all affinity_mask caching code in 4 of 5 drivers
    c. update all 5 drivers to call napi_affinity_no_change in poll
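
Roughly, I'm imagining something like the sketch below. It's
untested and only meant to show the shape of it; aside from
netif_napi_set_irq() and napi_affinity_no_change(), the field and
callback names are made up, and I've embedded the mask (rather than
using a pointer or cpumask_var_t) and dropped error handling just to
keep the sketch short:

/* 0/1. hypothetical additions to struct napi_struct */
struct napi_struct {
	/* ... existing fields, including int irq ... */
	struct cpumask			affinity_mask;
	struct irq_affinity_notify	affinity_notify;
};

/* 1c. generic callbacks that keep the cached mask current when the
 * IRQ's affinity is rewritten.
 */
static void napi_irq_affinity_notify(struct irq_affinity_notify *notify,
				     const cpumask_t *mask)
{
	struct napi_struct *napi =
		container_of(notify, struct napi_struct, affinity_notify);

	cpumask_copy(&napi->affinity_mask, mask);
}

static void napi_irq_affinity_release(struct kref *ref)
{
	/* nothing to free; the notifier is embedded in napi_struct */
}

/* 1a/1b. netif_napi_set_irq() stores the IRQ, snapshots the effective
 * affinity, and registers the notifier.
 */
void netif_napi_set_irq(struct napi_struct *napi, int irq)
{
	const struct cpumask *mask;

	napi->irq = irq;
	if (irq < 0)
		return;

	mask = irq_get_effective_affinity_mask(irq);
	if (mask)
		cpumask_copy(&napi->affinity_mask, mask);

	napi->affinity_notify.notify = napi_irq_affinity_notify;
	napi->affinity_notify.release = napi_irq_affinity_release;
	irq_set_affinity_notifier(irq, &napi->affinity_notify);
}

/* 2. the helper drivers call from their poll functions */
bool napi_affinity_no_change(const struct napi_struct *napi)
{
	return cpumask_test_cpu(smp_processor_id(), &napi->affinity_mask);
}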

Then ... any driver that adds support for netif_napi_set_irq in the
future gets in-core caching and updating of the mask for free? And
in the future netdev-genl could dump the mask, since it's in-core?
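
The driver side of the poll functions would then boil down to
roughly this (again just a sketch; the xyz_* helpers are
placeholders for whatever each driver already does):

static int xyz_napi_poll(struct napi_struct *napi, int budget)
{
	int work_done = xyz_clean_rx_irq(napi, budget); /* driver RX work */

	/* IRQ affinity changed under us: return less than budget so
	 * NAPI exits polling mode and gets rescheduled on the right
	 * CPU by the next interrupt.
	 */
	if (!napi_affinity_no_change(napi))
		return min(work_done, budget - 1);

	if (work_done < budget && napi_complete_done(napi, work_done))
		xyz_irq_enable(napi); /* re-arm the interrupt */

	return work_done;
}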

I'll mess around with that locally to see how it looks, but let me
know if that sounds like a better overall approach.

- Joe



