On Tue, 2020-10-06 at 23:26 +0200, Thomas Gleixner wrote:
> On Mon, Oct 05 2020 at 16:28, David Woodhouse wrote:
> > From: David Woodhouse <dwmw@xxxxxxxxxxxx>
> >
> > This is the maximum possible set of CPUs which can be used. Use it
> > to calculate the default affinity requested from __irq_alloc_descs()
> > by first attempting to find the intersection with irq_default_affinity,
> > or falling back to using just the max_affinity if the intersection
> > would be empty.
>
> And why do we need that as yet another argument?
>
> This is an optional property of the irq domain, really and no caller has
> any business with that.

Because irq_domain_alloc_descs() doesn't actually *take* the domain as
an argument. It's more of an internal function, which is only
non-static because it's used from kernel/irq/ipi.c too for some reason.

If we convert the IPI code to just call __irq_alloc_descs() directly,
perhaps we can actually make irq_domain_alloc_descs() static.

> >  int irq_domain_alloc_descs(int virq, unsigned int cnt, irq_hw_number_t hwirq,
> > -			   int node, const struct irq_affinity_desc *affinity)
> > +			   int node, const struct irq_affinity_desc *affinity,
> > +			   const struct cpumask *max_affinity)
> >  {
> > +	cpumask_var_t default_affinity;
> >  	unsigned int hint;
> > +	int i;
> > +
> > +	/* Check requested per-IRQ affinities are in the possible range */
> > +	if (affinity && max_affinity) {
> > +		for (i = 0; i < cnt; i++)
> > +			if (!cpumask_subset(&affinity[i].mask, max_affinity))
> > +				return -EINVAL;
>
> https://lore.kernel.org/r/alpine.DEB.2.20.1701171956290.3645@nanos
>
> What is preventing the affinity spreading code from spreading the masks
> out to unusable CPUs? The changelog is silent about that part.

I'm coming to the conclusion that we should allow unusable CPUs to be
specified at this point, just as we do offline CPUs.
That's largely driven by the realisation that our x86_non_ir_cpumask is
only going to contain online CPUs anyway, and hotplugged CPUs only get
added to it as they are brought online.

> > +	/*
> > +	 * Generate default affinity. Either the possible subset of
> > +	 * irq_default_affinity if such a subset is non-empty, or fall
> > +	 * back to the provided max_affinity if there is no intersection.
..
> > +	 * And just a copy of irq_default_affinity in the
> > +	 * !CONFIG_CPUMASK_OFFSTACK case.
>
> We know that already...
>
> > +	 */
> > +	memset(&default_affinity, 0, sizeof(default_affinity));
>
> Right, memset() before allocating is useful.

The memset is because there's no cpumask_var_t initialiser that I can
see. So cpumask_available() on the uninitialised 'default_affinity'
variable might be true even in the OFFSTACK case.

> > +	if ((max_affinity &&
> > +	     !cpumask_subset(irq_default_affinity, max_affinity))) {
> > +		if (!alloc_cpumask_var(&default_affinity, GFP_KERNEL))
> > +			return -ENOMEM;
> > +		cpumask_and(default_affinity, max_affinity,
> > +			    irq_default_affinity);
> > +		if (cpumask_empty(default_affinity))
> > +			cpumask_copy(default_affinity, max_affinity);
> > +	} else if (cpumask_available(default_affinity))
> > +		cpumask_copy(default_affinity, irq_default_affinity);
>
> That's garbage and unreadable.

That's why there was a comment explaining it... at which point you
claimed to already know :)

Clearly the comment didn't do the job it was supposed to do. I'll
rework.