Interesting,

On Tue, Aug 5, 2008 at 2:02 AM, Rene Herman <rene.herman@xxxxxxxxxxxx> wrote:
> On 04-08-08 17:36, Peter Teoh wrote:
>
>> I read about this sleeping spin lock:
>>
>> http://lwn.net/Articles/271817/
>>
>> What is that?
>
> A marketing oxymoron in the same style as, say, "voluntary preemption".

What is the difference between a moron and an oxymoron? Hahaha... just kidding.

> Ofcourse, "sleeping spinlocks" do not exist. Although adaptive spinlocks
> which spin for a while before giving up and going to sleep might sort of
> deserve the name, it's still no longer a spinlock if it goes to sleep (and
> adaptive spinlocks might be the current -rt thing or not, I don't know).
>
>> I don't quite understand. Normal spin lock is poll-based, but
>> sleeping spin lock is not, then how does it differed from mutex then?
>
> In principle, they do not. I've never looked at the RT stuff in detail but I
> believe that in practice, it's specifically an _RT_ mutex, which features
> priority inversion avoidance over regular mutexes.

With reference to page 8 of Thomas Gleixner's presentation:

http://www.kernel.org/pub/linux/kernel/people/tglx/preempt-rt/rtlws2006.pdf

a. What is the difference between an rtmutex and a normal mutex? Basically,
both are just an atomic check on some variable, followed by a look at the
scheduling queues for a runnable task, correct? As for the difference, I
suspect the rtmutex is just there to make the mutex more preemptible, so as
to reduce latency (at the cost of some raw performance). Correct?

> Note that the -rt kernel for example also makes interrupt handlers scheduled
> entities (ie, they run alongside anything else and are just part of the
> normal locking dance) so the locking rules and what you can and cannot do
> change significantly under -rt. Understanding how and when these "sleeping
> spinlocks" are safe requires a fuller digging into the specifics of -rt
> therefore.

Very interesting. Referring to page 8 above, "spinlock protected regions
become preemptible"... wow... so code holding a spinlock can now itself be
preempted.

Now, referring to Linus's Documentation/spinlocks.txt (quoted below):

   The reasons you mustn't use these versions if you have interrupts that
   play with the spinlock is that you can get deadlocks:

        spin_lock(&lock);                  =====> in process context
        ...
                <- interrupt comes in:
                        spin_lock(&lock);  =====> in interrupt context

   where an interrupt tries to lock an already locked variable. This is ok
   if the other interrupt happens on another CPU, but it is _not_ ok if the
   interrupt happens on the same CPU that already holds the lock, because
   the lock will obviously never be released       =====> problem lies here
   (because the interrupt is waiting for the lock, and the lock-holder is
   interrupted by the interrupt and will not continue until the interrupt
   has been processed).

So the reason a spinlock shared with an interrupt handler has to disable
interrupts is as given above. But if interrupts are not disabled on the same
CPU and the interrupt handler itself can be preempted (as under -rt), then
the scenario described above cannot happen either: the second spin_lock()
taken in interrupt context will itself be interrupted (preempted), and CPU
execution will return to the process context, which can then release the
lock. So no problem, right? Just want to know if my logic has any bugs,
thanks.

> If you do, be sure to post back the full story... ;-/
>
> Rene.

--
Regards,
Peter Teoh
--
To unsubscribe from this list: send an email with
"unsubscribe kernelnewbies" to ecartis@xxxxxxxxxxxx
Please read the FAQ at http://kernelnewbies.org/FAQ
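
P.S. For the rtmutex question above, a minimal sketch using the in-tree
rtmutex API from include/linux/rtmutex.h. The lock and function names are
hypothetical, and note that under -rt the spinlock_t-to-rtmutex substitution
is done by the patch itself, so ordinary driver code does not normally call
this API directly:

        /* Hypothetical illustration only; my_rt_lock and critical_section
         * are made-up names for this sketch. */
        #include <linux/rtmutex.h>

        static DEFINE_RT_MUTEX(my_rt_lock);

        static void critical_section(void)
        {
                /* May sleep; if a higher-priority task blocks here, the
                 * current owner's priority is boosted (priority
                 * inheritance), which is the "priority inversion
                 * avoidance" Rene mentions. */
                rt_mutex_lock(&my_rt_lock);

                /* ... touch shared data; under -rt this region stays
                 * preemptible ... */

                rt_mutex_unlock(&my_rt_lock);
        }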
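
P.P.S. And for the spinlocks.txt deadlock quoted above, a minimal
mainline-style sketch of the rule it describes (the lock, counter and
handler names are hypothetical): process context must use the irq-disabling
variant for any lock that an interrupt handler on the same CPU may also
take:

        /* Hypothetical illustration only; my_lock, shared_count and
         * my_irq_handler are made-up names for this sketch. */
        #include <linux/interrupt.h>
        #include <linux/spinlock.h>

        static DEFINE_SPINLOCK(my_lock);
        static unsigned long shared_count;

        /* Interrupt context: plain spin_lock() is enough here, since the
         * handler typically already runs with local interrupts disabled
         * in hardirq context. */
        static irqreturn_t my_irq_handler(int irq, void *dev_id)
        {
                spin_lock(&my_lock);
                shared_count++;
                spin_unlock(&my_lock);
                return IRQ_HANDLED;
        }

        /* Process context: must disable local interrupts while holding
         * the lock, otherwise my_irq_handler() can fire on this CPU and
         * spin forever on a lock this CPU already holds (the deadlock
         * quoted above). */
        static void process_context_update(void)
        {
                unsigned long flags;

                spin_lock_irqsave(&my_lock, flags);
                shared_count++;
                spin_unlock_irqrestore(&my_lock, flags);
        }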