On Mon, Jun 07 2021 at 15:33, Shung-Hsi Yu wrote:
> Most drivers' IRQ spreading schemes are naive compared to the IRQ
> spreading scheme introduced by the IRQ subsystem rework, so it is
> better to rely on request_irq() to spread IRQs out.
>
> However, drivers that care enough about performance also tend to
> allocate memory on the same NUMA node on which the IRQ handler will
> run. For such drivers to rely on request_irq() for IRQ spreading, we
> also need to provide an interface to retrieve the CPU index after
> calling request_irq().

So if you are interested in the resulting NUMA node, then why expose a
random CPU out of the affinity mask instead of exposing a function to
retrieve the NUMA node?

> +/**
> + * irq_get_effective_cpu - Retrieve the effective CPU index
> + * @irq:	Target interrupt to retrieve the effective CPU index for
> + *
> + * When the effective affinity cpumask has multiple CPUs set, it just
> + * returns the first CPU in the cpumask.
> + */
> +int irq_get_effective_cpu(unsigned int irq)
> +{
> +	struct irq_data *data = irq_get_irq_data(irq);

This can be NULL.

> +	struct cpumask *m;
> +
> +	m = irq_data_get_effective_affinity_mask(data);
> +	return cpumask_first(m);
> +}

Thanks,

        tglx
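
A minimal sketch of the kind of helper suggested above, combining the two
review points: it returns the NUMA node rather than a CPU index and checks
the irq_get_irq_data() result for NULL. The name irq_get_effective_numa_node()
and the NUMA_NO_NODE fallback are assumptions for illustration, not part of
the posted patch.

#include <linux/irq.h>
#include <linux/cpumask.h>
#include <linux/numa.h>
#include <linux/topology.h>

int irq_get_effective_numa_node(unsigned int irq)
{
	struct irq_data *data = irq_get_irq_data(irq);
	const struct cpumask *m;

	/* irq_get_irq_data() returns NULL for an invalid interrupt number */
	if (!data)
		return NUMA_NO_NODE;

	m = irq_data_get_effective_affinity_mask(data);

	/*
	 * Map the first CPU of the effective affinity mask to its NUMA
	 * node, so callers can allocate memory local to where the handler
	 * runs without caring which particular CPU that is.
	 */
	return cpu_to_node(cpumask_first(m));
}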