Hi Jessica,

On Mon, Jan 30, 2023 at 11:38:49PM +0000, Jessica Clarke wrote:
> On 30 Jan 2023, at 18:22, Sunil V L <sunilvl@xxxxxxxxxxxxxxxx> wrote:
> >
> > Add support for initializing the RISC-V INTC driver on ACPI based
> > platforms.
> >
> > Signed-off-by: Sunil V L <sunilvl@xxxxxxxxxxxxxxxx>
> > ---
> >  drivers/irqchip/irq-riscv-intc.c | 79 +++++++++++++++++++++++++++-----
> >  1 file changed, 67 insertions(+), 12 deletions(-)
> >
> > diff --git a/drivers/irqchip/irq-riscv-intc.c b/drivers/irqchip/irq-riscv-intc.c
> > index f229e3e66387..044ec92fcba7 100644
> > --- a/drivers/irqchip/irq-riscv-intc.c
> > +++ b/drivers/irqchip/irq-riscv-intc.c
> > @@ -6,6 +6,7 @@
> >   */
> >
> >  #define pr_fmt(fmt) "riscv-intc: " fmt
> > +#include <linux/acpi.h>
> >  #include <linux/atomic.h>
> >  #include <linux/bits.h>
> >  #include <linux/cpu.h>
> > @@ -112,6 +113,30 @@ static struct fwnode_handle *riscv_intc_hwnode(void)
> >  	return intc_domain->fwnode;
> >  }
> >
> > +static int __init riscv_intc_init_common(struct fwnode_handle *fn)
> > +{
> > +	int rc;
> > +
> > +	intc_domain = irq_domain_create_linear(fn, BITS_PER_LONG,
> > +					       &riscv_intc_domain_ops, NULL);
> > +	if (!intc_domain) {
> > +		pr_err("unable to add IRQ domain\n");
> > +		return -ENXIO;
> > +	}
> > +
> > +	rc = set_handle_irq(&riscv_intc_irq);
> > +	if (rc) {
> > +		pr_err("failed to set irq handler\n");
> > +		return rc;
> > +	}
> > +
> > +	riscv_set_intc_hwnode_fn(riscv_intc_hwnode);
> > +
> > +	pr_info("%d local interrupts mapped\n", BITS_PER_LONG);
> > +
> > +	return 0;
> > +}
> > +
> >  static int __init riscv_intc_init(struct device_node *node,
> >  				  struct device_node *parent)
> >  {
> > @@ -133,24 +158,54 @@ static int __init riscv_intc_init(struct device_node *node,
> >  	if (riscv_hartid_to_cpuid(hartid) != smp_processor_id())
> >  		return 0;
> >
> > -	intc_domain = irq_domain_add_linear(node, BITS_PER_LONG,
> > -					    &riscv_intc_domain_ops, NULL);
> > -	if (!intc_domain) {
> > -		pr_err("unable to add IRQ domain\n");
> > -		return -ENXIO;
> > -	}
> > -
> > -	rc = set_handle_irq(&riscv_intc_irq);
> > +	rc = riscv_intc_init_common(of_node_to_fwnode(node));
> >  	if (rc) {
> > -		pr_err("failed to set irq handler\n");
> > +		pr_err("failed to initialize INTC\n");
> >  		return rc;
> >  	}
> >
> > -	riscv_set_intc_hwnode_fn(riscv_intc_hwnode);
> > +	return 0;
> > +}
> >
> > -	pr_info("%d local interrupts mapped\n", BITS_PER_LONG);
> > +IRQCHIP_DECLARE(riscv, "riscv,cpu-intc", riscv_intc_init);
> >
> > +#ifdef CONFIG_ACPI
> > +
> > +static int __init
> > +riscv_intc_acpi_init(union acpi_subtable_headers *header,
> > +		     const unsigned long end)
> > +{
> > +	int rc;
> > +	struct fwnode_handle *fn;
> > +	struct acpi_madt_rintc *rintc;
> > +
> > +	rintc = (struct acpi_madt_rintc *)header;
> > +
> > +	/*
> > +	 * The ACPI MADT will have one INTC for each CPU (or HART)
> > +	 * so riscv_intc_acpi_init() function will be called once
> > +	 * for each INTC. We only need to do INTC initialization
> > +	 * for the INTC belonging to the boot CPU (or boot HART).
> > +	 */
> > +	if (riscv_hartid_to_cpuid(rintc->hart_id) != smp_processor_id())
> > +		return 0;
>
> Why are we carrying forward this mess to ACPI? The DT bindings are
> awful and a complete pain to deal with, as evidenced by how both Linux
> and FreeBSD have to go out of their way to do special things to only
> look at one of the many copies of the same thing.

Local interrupt controller structures are per-CPU on any architecture,
so there will be multiple such structures. It is up to the OS to choose
one of them. What is the issue here? The RISC-V DT code selects the
structure corresponding to the boot CPU. With ACPI we could pick any
one of them, but I think it is better to follow the DT code so the two
paths stay similar, and the boot CPU is always guaranteed to be
available.
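To make the flow concrete, here is a rough sketch (the probe wrapper
and its name are made up for illustration; the actual registration
site is in the part of the patch trimmed from the quote above):
acpi_table_parse_madt() invokes the handler once for every RINTC
subtable in the MADT, i.e. once per hart, so the boot-hart check means
exactly one of those invocations goes on to do the real INTC setup.

static int __init riscv_intc_acpi_probe(void)
{
	int count;

	/*
	 * One riscv_intc_acpi_init() invocation per RINTC subtable,
	 * i.e. per hart. Every invocation except the boot hart's
	 * returns 0 early, so the IRQ domain is created exactly once.
	 */
	count = acpi_table_parse_madt(ACPI_MADT_TYPE_RINTC,
				      riscv_intc_acpi_init, 0);
	if (count <= 0) {
		pr_err("no RINTC entries found in MADT\n");
		return -ENODEV;
	}

	return 0;
}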
Thanks!
Sunil