Akinobu Mita (on Wed, 25 Jan 2006 20:32:06 +0900) wrote:

>o generic {,test_and_}{set,clear,change}_bit() (atomic bitops)
...
>+static __inline__ void set_bit(int nr, volatile unsigned long *addr)
>+{
>+	unsigned long mask = BITOP_MASK(nr);
>+	unsigned long *p = ((unsigned long *)addr) + BITOP_WORD(nr);
>+	unsigned long flags;
>+
>+	_atomic_spin_lock_irqsave(p, flags);
>+	*p |= mask;
>+	_atomic_spin_unlock_irqrestore(p, flags);
>+}

Be very, very careful about using these generic *_bit() routines if the
architecture supports non-maskable interrupts.

NMI events can occur at any time, including when interrupts have been
disabled by *_irqsave().  So you can get NMI events occurring while a
*_bit function is holding a spin lock.  If the NMI handler also wants to
do bit manipulation (and they do) then you can get a deadlock between
the original caller of *_bit() and the NMI handler.

Doing any work that requires spinlocks in an NMI handler is just asking
for deadlock problems.  The generic *_bit() routines add a hidden
spinlock behind what was previously a safe operation.

I would even say that any arch that supports any type of NMI event
_must_ define its own bit routines that do not rely on your
_atomic_spin_lock_irqsave() and its hash of spinlocks.
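
Just to illustrate the kind of thing I mean (this is only a rough
sketch, not something from the patch, and it assumes the arch has a
usable cmpxchg()): the routine below reuses BITOP_MASK()/BITOP_WORD()
from the patch but never takes a lock, so an NMI handler that also
calls set_bit() cannot deadlock against the interrupted caller.

/*
 * Illustrative sketch only: an NMI-safe set_bit() for an arch with a
 * native cmpxchg().  No spinlock is taken, so bit manipulation from
 * NMI context cannot deadlock against the interrupted code.
 */
static __inline__ void set_bit(int nr, volatile unsigned long *addr)
{
	unsigned long mask = BITOP_MASK(nr);
	unsigned long *p = ((unsigned long *)addr) + BITOP_WORD(nr);
	unsigned long old;

	do {
		old = *p;
	} while (cmpxchg(p, old, old | mask) != old);
}

Architectures with ll/sc or an x86-style lock prefix can of course do
the same thing even more cheaply with their native instructions.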