On Fri, 10 Feb 2023 09:56:03 +0000, Johan Hovold <johan@xxxxxxxxxx> wrote:
>
> On Thu, Feb 09, 2023 at 04:00:55PM +0000, Marc Zyngier wrote:
> > On Thu, 09 Feb 2023 13:23:23 +0000,
> > Johan Hovold <johan+linaro@xxxxxxxxxx> wrote:
> > >
> > > The IRQ domain structures are currently protected by the global
> > > irq_domain_mutex. Switch to using more fine-grained per-domain locking,
> > > which can speed up parallel probing by reducing lock contention.
> > >
> > > On a recent arm64 laptop, the total time spent waiting for the locks
> > > during boot drops from 160 to 40 ms on average, while the maximum
> > > aggregate wait time drops from 550 to 90 ms over ten runs for example.
> > >
> > > Note that the domain lock of the root domain (innermost domain) must be
> > > used for hierarchical domains. For non-hierarchical domains (as for root
> > > domains), the new root pointer is set to the domain itself so that
> > > domain->root->mutex can be used in shared code paths.
> > >
> > > Also note that hierarchical domains should be constructed using
> > > irq_domain_create_hierarchy() (or irq_domain_add_hierarchy()) to avoid
> > > poking at irqdomain internals. As a safeguard, the lockdep assertion in
> > > irq_domain_set_mapping() will catch any offenders that fail to set the
> > > root domain pointer.
> > >
> > > Tested-by: Hsin-Yi Wang <hsinyi@xxxxxxxxxxxx>
> > > Tested-by: Mark-PK Tsai <mark-pk.tsai@xxxxxxxxxxxx>
> > > Signed-off-by: Johan Hovold <johan+linaro@xxxxxxxxxx>
> > > ---
> > >  include/linux/irqdomain.h |  4 +++
> > >  kernel/irq/irqdomain.c    | 61 +++++++++++++++++++++++++--------------
> > >  2 files changed, 44 insertions(+), 21 deletions(-)
> > >
> > > diff --git a/include/linux/irqdomain.h b/include/linux/irqdomain.h
> > > index 16399de00b48..cad47737a052 100644
> > > --- a/include/linux/irqdomain.h
> > > +++ b/include/linux/irqdomain.h
> > > @@ -125,6 +125,8 @@ struct irq_domain_chip_generic;
> > >   * core code.
> > >   * @flags: Per irq_domain flags
> > >   * @mapcount: The number of mapped interrupts
> > > + * @mutex: Domain lock, hierarhical domains use root domain's lock
> >
> > nit: hierarchical
> >
> > > + * @root: Pointer to root domain, or containing structure if non-hierarchical
> >
> > > @@ -226,6 +226,17 @@ struct irq_domain *__irq_domain_add(struct fwnode_handle *fwnode, unsigned int s
> > >
> > >  	domain->revmap_size = size;
> > >
> > > +	/*
> > > +	 * Hierarchical domains use the domain lock of the root domain
> > > +	 * (innermost domain).
> > > +	 *
> > > +	 * For non-hierarchical domains (as for root domains), the root
> > > +	 * pointer is set to the domain itself so that domain->root->mutex
> > > +	 * can be used in shared code paths.
> > > +	 */
> > > +	mutex_init(&domain->mutex);
> > > +	domain->root = domain;
> > > +
> > >  	irq_domain_check_hierarchy(domain);
> > >
> > >  	mutex_lock(&irq_domain_mutex);
> >
> > > @@ -518,7 +529,11 @@ static void irq_domain_set_mapping(struct irq_domain *domain,
> > >  				   irq_hw_number_t hwirq,
> > >  				   struct irq_data *irq_data)
> > >  {
> > > -	lockdep_assert_held(&irq_domain_mutex);
> > > +	/*
> > > +	 * This also makes sure that all domains point to the same root when
> > > +	 * called from irq_domain_insert_irq() for each domain in a hierarchy.
> > > +	 */
> > > +	lockdep_assert_held(&domain->root->mutex);
> > >
> > >  	if (irq_domain_is_nomap(domain))
> > >  		return;
> > > @@ -540,7 +555,7 @@ static void irq_domain_disassociate(struct irq_domain *domain, unsigned int irq)
> > >
> > >  	hwirq = irq_data->hwirq;
> > >
> > > -	mutex_lock(&irq_domain_mutex);
> > > +	mutex_lock(&domain->mutex);
> >
> > So you made that point about being able to uniformly use root->mutex,
> > which I think is a good invariant. Yet you hardly make use of it. Why?
>
> I went back and forth over that a bit, but decided to only use
> domain->root->mutex in paths that can be called for hierarchical
> domains (i.e. the "shared code paths" mentioned above).
>
> Using it in paths that are clearly only called for non-hierarchical
> domains where domain->root == domain felt a bit lazy.

My concern here is that as this code gets further refactored, it may
become much harder to reason about what is the correct level of
locking.

> The counter argument is of course that using domain->root->mutex allows
> people to think less about the code they are changing, but that's not
> necessarily always a good thing.

Eventually, non-hierarchical domains should simply die and be replaced
with a single-level hierarchy. Having a unified locking in place will
definitely make the required work clearer.

> Also note that the lockdep asserts in the revmap helpers would catch
> anyone using domain->mutex where they should not (i.e. using
> domain->mutex for a hierarchical domain).

Lockdep is great, but lockdep is a runtime thing. It doesn't help
reasoning about what gets locked when changing this code.

> > > @@ -1132,6 +1147,7 @@ struct irq_domain *irq_domain_create_hierarchy(struct irq_domain *parent,
> > >  	else
> > >  		domain = irq_domain_create_tree(fwnode, ops, host_data);
> > >  	if (domain) {
> > > +		domain->root = parent->root;
> > >  		domain->parent = parent;
> > >  		domain->flags |= flags;
> >
> > So we still have a bug here, as we have published a domain that we
> > keep updating. A parallel probing could find it in the interval and do
> > something completely wrong.
>
> Indeed we do, even if device links should make this harder to hit these
> days.
>
> > Splitting the work would help, as per the following patch.
>
> Looks good to me. Do you want to submit that as a patch that I'll rebase
> on or should I submit it as part of a v6?

Just take it directly.

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.