On 2018-04-25 11:11:08 [+0200], Rafael J. Wysocki wrote:
> On Wednesday, April 25, 2018 10:57:58 AM CEST Sebastian Andrzej Siewior wrote:
> > On 2018-04-24 22:55:20 [+0200], Rafael J. Wysocki wrote:
> > > > diff --git a/include/acpi/platform/aclinux.h b/include/acpi/platform/aclinux.h
> > > > index a0b232703302..38eaa3235210 100644
> > > > --- a/include/acpi/platform/aclinux.h
> > > > +++ b/include/acpi/platform/aclinux.h
> > > > @@ -102,6 +102,7 @@
> > > >
> > > >  #define acpi_cache_t struct kmem_cache
> > > >  #define acpi_spinlock spinlock_t *
> > > > +#define acpi_raw_spinlock raw_spinlock_t *
> > > I would prefer to redefine acpi_spinlock as raw_spinlock_t and then
> > > acpi_os_acquire/release_lock() as
> > > raw_spin_lock_irqsave/unlock_irqrestore(), respectively.
> > I would rather not convert all current ACPI spinlock_t into
> > raw_spinlock_t. Only those which are required to.
> I'm actually not 100% sure right now (as a rule things like this come up
> when you are not expecting them :-)), but IIRC there are assumptions
> regarding at least some of the ACPICA locks as being real spinlocks.

Unless the caller already holds a raw lock or is calling from an IRQ-off
region, a normal lock should do it. This is true even on -RT, where an
interrupt handler does not count as IRQ-off context because it is threaded
(except for a few special interrupts such as the timer).

> They need to be reviewed from this angle and the code in question is
> far from straightforward.

If you have any questions / notes, I am all yours.

> > I don't know if there is anything special about acpi_gbl_hardware_lock
> > but we have other raw locks already (like erst_lock or c3_lock).
> > I could come up with something like acpi_os_acquire_rawlock() if you
> > prefer this instead.
> That might work, but the other OSes using ACPICA don't actually have a "raw
> spinlock" concept, so this should just fall back to acpi_os_acquire_lock()
> if the OS doesn't implement the "raw" thing separately.

Okay.

Sebastian
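
For illustration, a minimal sketch of the Linux-side accessors the proposed
acpi_raw_spinlock would need, modelled on the existing
acpi_os_acquire_lock()/acpi_os_release_lock() pair in drivers/acpi/osl.c.
The function names follow the acpi_os_acquire_rawlock() idea discussed above
and are not final:

#include <linux/spinlock.h>
#include <acpi/acpi.h>

/*
 * Same contract as acpi_os_acquire_lock()/acpi_os_release_lock(), but
 * backed by raw_spinlock_t so the lock stays a real (non-sleeping)
 * spinlock on PREEMPT_RT.
 */
acpi_cpu_flags acpi_os_acquire_raw_lock(acpi_raw_spinlock lockp)
{
	acpi_cpu_flags flags;

	raw_spin_lock_irqsave(lockp, flags);
	return flags;
}

void acpi_os_release_raw_lock(acpi_raw_spinlock lockp, acpi_cpu_flags flags)
{
	raw_spin_unlock_irqrestore(lockp, flags);
}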
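
And a sketch of the ACPICA-side fallback Rafael describes: if the host OS
does not provide separate raw-lock primitives, the raw variants simply map
back to the regular acpi_os_*_lock() interfaces. The guard macro name used
here is only an assumption for illustration:

/* Hosts that have no separate "raw" spinlock concept fall back to the
 * regular lock interfaces. ACPI_USE_NATIVE_RAW_SPINLOCK is a placeholder
 * name for whatever opt-in define a host like Linux would set.
 */
#ifndef ACPI_USE_NATIVE_RAW_SPINLOCK
#define acpi_raw_spinlock			acpi_spinlock
#define acpi_os_create_raw_lock(out_handle)	acpi_os_create_lock(out_handle)
#define acpi_os_delete_raw_lock(handle)		acpi_os_delete_lock(handle)
#define acpi_os_acquire_raw_lock(handle)	acpi_os_acquire_lock(handle)
#define acpi_os_release_raw_lock(handle, flags)	acpi_os_release_lock(handle, flags)
#endif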