On Fri, Apr 02, 2021 at 06:05:39PM +0900, Hector Martin wrote:
> This is the root interrupt controller used on Apple ARM SoCs such as the
> M1. This irqchip driver performs multiple functions:
>
> * Handles both IRQs and FIQs
>
> * Drives the AIC peripheral itself (which handles IRQs)
>
> * Dispatches FIQs to downstream hard-wired clients (currently the ARM
>   timer).
>
> * Implements a virtual IPI multiplexer to funnel multiple Linux IPIs
>   into a single hardware IPI
>
> Signed-off-by: Hector Martin <marcan@xxxxxxxxx>
> ---
>  MAINTAINERS                     |   2 +
>  drivers/irqchip/Kconfig         |   8 +
>  drivers/irqchip/Makefile        |   1 +
>  drivers/irqchip/irq-apple-aic.c | 837 ++++++++++++++++++++++++++++++++
>  include/linux/cpuhotplug.h      |   1 +
>  5 files changed, 849 insertions(+)
>  create mode 100644 drivers/irqchip/irq-apple-aic.c

Couple of stale comment nits:

> +static void aic_ipi_unmask(struct irq_data *d)
> +{
> +	struct aic_irq_chip *ic = irq_data_get_irq_chip_data(d);
> +	u32 irq_bit = BIT(irqd_to_hwirq(d));
> +
> +	atomic_or(irq_bit, this_cpu_ptr(&aic_vipi_enable));
> +
> +	/*
> +	 * The atomic_or() above must complete before the
> +	 * atomic_read_acquire() below to avoid racing aic_ipi_send_mask().
> +	 */

(the atomic_read_acquire() is now an atomic_read())

> +	smp_mb__after_atomic();
> +
> +	/*
> +	 * If a pending vIPI was unmasked, raise a HW IPI to ourselves.
> +	 * No barriers needed here since this is a self-IPI.
> +	 */
> +	if (atomic_read(this_cpu_ptr(&aic_vipi_flag)) & irq_bit)
> +		aic_ic_write(ic, AIC_IPI_SEND, AIC_IPI_SEND_CPU(smp_processor_id()));
> +}
> +
> +static void aic_ipi_send_mask(struct irq_data *d, const struct cpumask *mask)
> +{
> +	struct aic_irq_chip *ic = irq_data_get_irq_chip_data(d);
> +	u32 irq_bit = BIT(irqd_to_hwirq(d));
> +	u32 send = 0;
> +	int cpu;
> +	unsigned long pending;
> +
> +	for_each_cpu(cpu, mask) {
> +		/*
> +		 * This sequence is the mirror of the one in aic_ipi_unmask();
> +		 * see the comment there. Additionally, release semantics
> +		 * ensure that the vIPI flag set is ordered after any shared
> +		 * memory accesses that precede it. This therefore also pairs
> +		 * with the atomic_fetch_andnot in aic_handle_ipi().
> +		 */
> +		pending = atomic_fetch_or_release(irq_bit, per_cpu_ptr(&aic_vipi_flag, cpu));
> +
> +		/*
> +		 * The atomic_fetch_or_release() above must complete before the
> +		 * atomic_read_acquire() below to avoid racing aic_ipi_unmask().
> +		 */

(same here)

> +		smp_mb__after_atomic();
> +
> +		if (!(pending & irq_bit) &&
> +		    (atomic_read(per_cpu_ptr(&aic_vipi_enable, cpu)) & irq_bit))
> +			send |= AIC_IPI_SEND_CPU(cpu);
> +	}

But with that:

Acked-by: Will Deacon <will@xxxxxxxxxx>

Will
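
For reference, a standalone C11-atomics sketch of the enable/flag pairing the
quoted comments describe. This is illustrative only: the names (vipi_enable,
vipi_flag, hw_ipi_count) and the seq_cst fences standing in for
smp_mb__after_atomic() are assumptions for the sketch, not taken from the
driver. The point it demonstrates is the store-buffering-style pairing: each
side publishes its bit, issues a full barrier, then reads the other side's
word, so a vIPI that is both pending and enabled always raises at least one
hardware IPI.

	/* Illustrative analogue of the unmask/send ordering; not driver code. */
	#include <stdatomic.h>
	#include <stdio.h>
	#include <threads.h>

	static atomic_uint vipi_enable;   /* which vIPIs this CPU accepts (assumed name) */
	static atomic_uint vipi_flag;     /* which vIPIs are pending (assumed name)      */
	static atomic_uint hw_ipi_count;  /* how many HW IPIs got raised (assumed name)  */

	#define IRQ_BIT 0x1u

	/* Mirrors the unmask side: publish the enable bit, barrier, check the flag. */
	static int unmask_side(void *unused)
	{
		(void)unused;
		atomic_fetch_or(&vipi_enable, IRQ_BIT);
		atomic_thread_fence(memory_order_seq_cst);	/* stands in for smp_mb__after_atomic() */
		if (atomic_load(&vipi_flag) & IRQ_BIT)
			atomic_fetch_add(&hw_ipi_count, 1);	/* self-IPI */
		return 0;
	}

	/* Mirrors the send side: publish the flag bit, barrier, check the enable bit. */
	static int send_side(void *unused)
	{
		unsigned int old;

		(void)unused;
		old = atomic_fetch_or(&vipi_flag, IRQ_BIT);
		atomic_thread_fence(memory_order_seq_cst);	/* stands in for smp_mb__after_atomic() */
		if (!(old & IRQ_BIT) && (atomic_load(&vipi_enable) & IRQ_BIT))
			atomic_fetch_add(&hw_ipi_count, 1);	/* remote IPI */
		return 0;
	}

	int main(void)
	{
		thrd_t a, b;

		thrd_create(&a, unmask_side, NULL);
		thrd_create(&b, send_side, NULL);
		thrd_join(a, NULL);
		thrd_join(b, NULL);

		/* With both full barriers in place, this is always >= 1. */
		printf("hw IPIs raised: %u\n", atomic_load(&hw_ipi_count));
		return 0;
	}

Dropping either barrier would allow both reads to miss the other side's store,
which is the lost-IPI race the quoted comments are guarding against.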