Hi Rafael,

On Mon, Nov 27, 2023 at 08:57:43PM +0100, Rafael J. Wysocki wrote:
> From: Rafael J. Wysocki <rafael.j.wysocki@xxxxxxxxx>
>
> In the current arrangement, all of the acpi_ev_sci_xrupt_handler() code
> is run as an interrupt handler for the SCI, in interrupt context. Among
> other things, this causes it to run with local interrupts off, which
> can be problematic if many GPEs are enabled and they are located in the
> I/O address space, for example (because in that case local interrupts
> will be off for the combined duration of all of the GPE hardware
> accesses carried out while handling an SCI, and that may be quite a bit
> of time in extreme scenarios).
>
> However, there is no particular reason why the code in question really
> needs to run in interrupt context and, in particular, it has no specific
> reason to run with local interrupts off. The only real requirement is
> to prevent multiple instances of it from running in parallel with each
> other, but that can be achieved regardless.
>
> For this reason, use request_threaded_irq() instead of request_irq() for
> the ACPI SCI and pass IRQF_ONESHOT to it in flags to indicate that the
> interrupt needs to be masked while its handling thread is running, so as
> to prevent it from re-triggering while it is being handled (and in
> particular until the final handled/not handled outcome is determined).
>
> While at it, drop a redundant local variable from acpi_irq().
>
> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@xxxxxxxxx>
> ---
>
> The code inspection and (necessarily limited) testing carried out by me
> are good indications that this should just always work, but there may
> still be some really odd platform configurations I'm overlooking, so if
> you have a way to give it a go, please do so.

Tried this on ADL-S and ADL-P systems that I have here and both work just
fine with the patch applied. I can see the SCI interrupt count increase in
/proc/interrupts as expected. Did a couple of s2idle cycles too, all good.

Tested-by: Mika Westerberg <mika.westerberg@xxxxxxxxxxxxxxx>
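
For anyone following along, here is a minimal sketch of the shape of the
change being discussed, not the actual diff: acpi_irq_thread(),
acpi_install_sci() and acpi_sci_handle_event() are illustrative placeholder
names, not the symbols touched by the patch.

	#include <linux/interrupt.h>

	/* Placeholder for the ACPICA SCI dispatch; not a real kernel symbol. */
	extern bool acpi_sci_handle_event(void *dev_id);

	/* Thread function: runs in process context with local interrupts on. */
	static irqreturn_t acpi_irq_thread(int irq, void *dev_id)
	{
		/*
		 * The (potentially slow) GPE hardware accesses happen here,
		 * so they no longer run with local interrupts off.
		 */
		return acpi_sci_handle_event(dev_id) ? IRQ_HANDLED : IRQ_NONE;
	}

	static int acpi_install_sci(unsigned int irq, void *dev_id)
	{
		/*
		 * NULL primary handler plus IRQF_ONESHOT: the IRQ core masks
		 * the line when the SCI fires and keeps it masked until the
		 * thread function returns, so only one instance runs at a
		 * time and the interrupt cannot re-trigger mid-handling.
		 */
		return request_threaded_irq(irq, NULL, acpi_irq_thread,
					    IRQF_SHARED | IRQF_ONESHOT,
					    "acpi", dev_id);
	}

Note that with a NULL primary handler the IRQ core requires IRQF_ONESHOT
and rejects the request without it, which is another reason the flag has
to be passed here.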