On Wed, Aug 14, 2024 at 07:03:35PM +0300, Shay Drori wrote:
> On 14/08/2024 18:19, Joe Damato wrote:
> > On Wed, Aug 14, 2024 at 08:09:15AM -0700, Jakub Kicinski wrote:
> > > On Wed, 14 Aug 2024 13:12:08 +0100 Joe Damato wrote:
> > > > Actually... how about a slightly different approach, which
> > > > caches the affinity mask in the core?
> > >
> > > I was gonna say :)
> > >
> > > > 0. Extend napi struct to have a struct cpumask * field
> > > >
> > > > 1. extend netif_napi_set_irq to:
> > > >    a. store the IRQ number in the napi struct (as you
> > > >       suggested)
> > > >    b. call irq_get_effective_affinity_mask to store the mask
> > > >       in the napi struct
> > > >    c. set up generic affinity_notify.notify and
> > > >       affinity_notify.release callbacks to update the in-core
> > > >       mask when it changes
> > >
> > > This part I'm not an expert on.
>
> Several net drivers (mlx5, mlx4, ice, ena, and more) use a feature
> called ARFS (rmap) [1], and that feature uses the affinity notifier
> mechanism. Also, the affinity notifier infra supports only a single
> notifier per IRQ.
>
> Hence, your suggestion (1.c) will break the ARFS feature.
>
> [1] see irq_cpu_rmap_add()

Thanks for taking a look and for your reply. I did notice the ARFS
use in some drivers and figured that might be why the notifiers were
being used in some cases.

I think the question comes down to whether adding a call to
irq_get_effective_affinity_mask in the hot path is a bad idea.

If it is, then the only option is to have the drivers pass in their
IRQ affinity masks, as Stanislav suggested, so that no call is added
to the hot path.

If not, then the IRQ stored in napi_struct can be used and the
affinity mask can be regenerated on every napi poll. i40e/gve/iavf
would need calls to netif_napi_set_irq to set up the IRQ mapping,
which seems straightforward.

In both cases, the IRQ notifier machinery would be left as-is so that
ARFS keeps working.

I suspect the preferred solution is to avoid adding that call to the
hot path, right?
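
To make the two options concrete, here is a rough, compile-untested
sketch. The affinity_mask field and the napi_affinity() helper are
names I made up for illustration; only napi->irq and
irq_get_effective_affinity_mask() are taken from the discussion above:

/* include/linux/netdevice.h (sketch) */
struct napi_struct {
	/* ... existing fields ... */
	int			irq;
	/* illustrative: snapshot of the IRQ's effective affinity,
	 * taken when the driver calls netif_napi_set_irq(). With
	 * 1.c dropped, this copy can go stale if the IRQ is
	 * migrated later.
	 */
	struct cpumask		affinity_mask;
};

static inline void netif_napi_set_irq(struct napi_struct *napi, int irq)
{
	napi->irq = irq;
	if (irq > 0)
		cpumask_copy(&napi->affinity_mask,
			     irq_get_effective_affinity_mask(irq));
}

/* The no-caching alternative: derive the mask from the stored IRQ
 * on every napi poll. This is exactly the hot-path call in question.
 */
static inline const struct cpumask *
napi_affinity(const struct napi_struct *napi)
{
	return irq_get_effective_affinity_mask(napi->irq);
}

The trade-off is the usual one: the cached copy keeps the hot path
untouched but can be stale after an affinity change (since the
notifiers stay reserved for ARFS), while the per-poll lookup is
always accurate at the cost of calling into the IRQ core from napi
poll.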