Re: [RFC net-next 0/6] Cleanup IRQ affinity checks in several drivers

On Wed, 14 Aug 2024 13:12:08 +0100 Joe Damato wrote:
> Actually... how about a slightly different approach, which caches
> the affinity mask in the core?

I was gonna say :)

>   0. Extend napi struct to have a struct cpumask * field
> 
>   1. extend netif_napi_set_irq to:
>     a. store the IRQ number in the napi struct (as you suggested)
>     b. call irq_get_effective_affinity_mask to store the mask in the
>        napi struct
>     c. set up generic affinity_notify.notify and
>        affinity_notify.release callbacks to update the in core mask
>        when it changes

This part I'm not an expert on.
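
IIUC the pieces would look roughly like this, though (completely
untested; napi_irq_notify()/napi_irq_release() are made-up names,
the mask is embedded rather than a pointer just to keep the sketch
short, and there's no locking around it on the theory that the
poll-time check is best-effort anyway):

/* 0. new fields in struct napi_struct */
struct napi_struct {
        /* ... existing fields ... */
        int                             irq;
        struct cpumask                  affinity_mask;
        struct irq_affinity_notify      affinity_notify;
};

/* 1c. keep the cached mask in sync when affinity changes */
static void napi_irq_notify(struct irq_affinity_notify *notify,
                            const cpumask_t *mask)
{
        struct napi_struct *napi;

        napi = container_of(notify, struct napi_struct, affinity_notify);
        cpumask_copy(&napi->affinity_mask, mask);
}

static void napi_irq_release(struct kref *ref)
{
        /* mask and notifier are embedded in the napi struct,
         * nothing to free
         */
}

void netif_napi_set_irq(struct napi_struct *napi, int irq)
{
        /* 1a: store the IRQ number */
        napi->irq = irq;
        if (irq < 0)
                return;

        /* 1b: snapshot the current effective mask */
        cpumask_copy(&napi->affinity_mask,
                     irq_get_effective_affinity_mask(irq));

        /* 1c: keep it in sync from now on */
        napi->affinity_notify.notify = napi_irq_notify;
        napi->affinity_notify.release = napi_irq_release;
        irq_set_affinity_notifier(irq, &napi->affinity_notify);
}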

>   2. add napi_affinity_no_change which now takes a napi_struct
> 
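Right, and once the mask is cached in core the helper itself
should be trivial, something like:

static bool napi_affinity_no_change(struct napi_struct *napi)
{
        return cpumask_test_cpu(smp_processor_id(),
                                &napi->affinity_mask);
}
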
>   3. cleanup all 5 drivers:
>     a. add calls to netif_napi_set_irq for all 5 (I think no RTNL
>        is needed, so I think this would be straightforward?)
>     b. remove all affinity_mask caching code in 4 of 5 drivers
>     c. update all 5 drivers to call napi_affinity_no_change in poll
> 
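For 3c, each driver's poll would then collapse to the pattern the
drivers open-code today, roughly (the foo_* names here are just
stand-ins, not any real driver):

static int foo_poll(struct napi_struct *napi, int budget)
{
        int work_done = foo_clean_rings(napi, budget);

        if (work_done == budget) {
                /* More work pending: normally keep polling on this
                 * CPU, but stop if the IRQ has been moved so the
                 * next interrupt fires on the right CPU.
                 */
                if (napi_affinity_no_change(napi))
                        return budget;
                work_done--;
        }

        if (napi_complete_done(napi, work_done))
                foo_irq_enable(napi);   /* re-arm the interrupt */

        return work_done;
}
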
> Then ... anyone who adds support for netif_napi_set_irq to their
> driver in the future gets automatic support in-core for
> caching/updating of the mask? And in the future netdev-genl could
> dump the mask since it's in-core?
> 
> I'll mess around with that locally to see how it looks, but let me
> know if that sounds like a better overall approach.

Could we even handle this directly as part of __napi_poll(),
once the driver gives core all of the relevant pieces of information?
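
Very roughly what I mean (not even compile-tested, and it leaves
open who re-arms the IRQ, which today happens in the driver's
poll):

/* net/core/dev.c */
static int __napi_poll(struct napi_struct *n, bool *repoll)
{
        int work, weight;

        weight = n->weight;
        work = n->poll(n, weight);
        /* ... existing trace / WARN / GRO handling ... */

        if (work < weight)
                return work;

        /* Full budget consumed: we'd normally repoll on this CPU.
         * Skip that when the cached affinity mask no longer covers
         * this CPU, so the work follows the interrupt instead.
         * Open question: the driver hasn't completed or re-armed
         * its IRQ at this point, so core would need a hook for that.
         */
        if (n->irq < 0 ||
            cpumask_test_cpu(smp_processor_id(), &n->affinity_mask))
                *repoll = true;

        return work;
}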



