> On Oct 31, 2024, at 18:35, Thomas Gleixner <tglx@xxxxxxxxxxxxx> wrote:
>
> On Thu, Oct 31 2024 at 15:46, guanjun@xxxxxxxxxxxxxxxxx wrote:
>> #ifdef CONFIG_SMP
>>
>> +static unsigned int __read_mostly managed_irqs_per_node;
>> +static struct cpumask managed_irqs_cpumsk[MAX_NUMNODES] __cacheline_aligned_in_smp = {
>> +	[0 ... MAX_NUMNODES-1] = {CPU_BITS_ALL}
>> +};
>>
>> +static void __group_prepare_affinity(struct cpumask *premask,
>> +				     cpumask_var_t *node_to_cpumask)
>> +{
>> +	nodemask_t nodemsk = NODE_MASK_NONE;
>> +	unsigned int ncpus, n;
>> +
>> +	get_nodes_in_cpumask(node_to_cpumask, premask, &nodemsk);
>> +
>> +	for_each_node_mask(n, nodemsk) {
>> +		cpumask_and(&managed_irqs_cpumsk[n], &managed_irqs_cpumsk[n], premask);
>> +		cpumask_and(&managed_irqs_cpumsk[n], &managed_irqs_cpumsk[n], node_to_cpumask[n]);
>
> How is this managed_irqs_cpumsk array protected against concurrency?

My intention was to allocate up to `managed_irqs_per_node` CPU bits from
`managed_irqs_cpumsk[n]`, even if another task modifies some of the bits
in `managed_irqs_cpumsk[n]` at the same time.

>
>> +		ncpus = cpumask_weight(&managed_irqs_cpumsk[n]);
>> +		if (ncpus < managed_irqs_per_node) {
>> +			/* Reset node n to current node cpumask */
>> +			cpumask_copy(&managed_irqs_cpumsk[n], node_to_cpumask[n]);
>
> This whole logic is incomprehensible and aside of the concurrency
> problem it's broken when CPUs are made present at run-time because these
> cpu masks are static and represent the stale state of the last
> invocation.

Sorry, I realize there is indeed a logic issue here (it was caused by
developing against 5.10 LTS and then rebasing onto the latest linux-next).
A rough sketch of the direction I am considering is appended at the end of
this mail.

>
> Given the limitations of the x86 vector space, which is not going away
> anytime soon, there are only two options IMO to handle such a scenario.
>
>    1) Tell the nvme/block layer to disable queue affinity management
>
>    2) Restrict the devices and queues to the nodes they sit on

I have tried fixing this issue in the nvme driver, but later discovered
that the same issue exists with virtio-net. Therefore, I want to address
it with a more general solution.

Thanks,
Guanjun

>
> Thanks,
>
>         tglx
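
--

For reference, here is the rough, untested sketch mentioned above of how
the shared per-node masks could be serialized and refreshed on each
invocation. It reuses the names from the quoted patch (managed_irqs_cpumsk,
managed_irqs_per_node, __group_prepare_affinity) and assumes the
lib/group_cpus.c context of that patch with <linux/mutex.h> available; the
mutex and the refill-from-the-current-node-mask step are hypothetical
additions for illustration only, not part of the posted series, and whether
a mutex is the right primitive depends on the calling context of
group_cpus_evenly().

/*
 * Hypothetical lock, not in the posted series: serializes all updates
 * to the shared managed_irqs_cpumsk[] array.
 */
static DEFINE_MUTEX(managed_irqs_lock);

static void __group_prepare_affinity(struct cpumask *premask,
				     cpumask_var_t *node_to_cpumask)
{
	nodemask_t nodemsk = NODE_MASK_NONE;
	unsigned int ncpus, n;

	get_nodes_in_cpumask(node_to_cpumask, premask, &nodemsk);

	mutex_lock(&managed_irqs_lock);
	for_each_node_mask(n, nodemsk) {
		/*
		 * Narrow the node's remaining pool to the requested mask
		 * and to the CPUs currently known to sit on node n.
		 */
		cpumask_and(&managed_irqs_cpumsk[n],
			    &managed_irqs_cpumsk[n], premask);
		cpumask_and(&managed_irqs_cpumsk[n],
			    &managed_irqs_cpumsk[n], node_to_cpumask[n]);

		ncpus = cpumask_weight(&managed_irqs_cpumsk[n]);
		if (ncpus < managed_irqs_per_node) {
			/*
			 * Refill from the current node mask so CPUs made
			 * present since the last invocation are picked up,
			 * instead of reusing the stale static state.
			 */
			cpumask_copy(&managed_irqs_cpumsk[n],
				     node_to_cpumask[n]);
		}
	}
	mutex_unlock(&managed_irqs_lock);
}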