> > means to lock "&ohci->lock" when doing "spin_lock(&ohci->lock);" but
> > will treat it as locking the abstractly-named "lock" context, while on
> > UP/no-preempt the "spin_lock" macro is expanded by the preprocessor and
> > you will get "&ohci->lock" as the expression.
>
> Yeah, well the lock being acquired or released *IS* "ohci->lock",
> but the spinlock calls don't take the lock, they take pointers
> to it!
>
> If you were to argue that understanding pointers like that is a
> lot to demand of "sparse", I might agree.  But that won't change
> the fact that locks themselves are not pointers to locks.  ;)

Yeah, I agree the locks aren't pointers, but really, it is a lot easier
for you to do that annotation than for sparse to figure it out.

I blame these problems on whoever implemented sparse context tracking in
the first place: they documented the optional first expression parameter
and then never wrote any code that looks at the expression at all.

The patch series I posted a while ago (I'll repost) would fix the
existing problems with the context tracking patch Josh put in,
especially the problem that

	spin_lock_irqsave(&a);
	spin_unlock_irqrestore(&a);

currently gives a warning on some kernel builds because one is a macro
and the other is not. It has its own problems again with the RESULT
"macro" (you can only use it on a declaration, not a definition), but
since that is completely new, I don't think it is really a problem.

A problem my patch series does introduce, though, is the static inline
one: currently, static inlines are inlined right into the function when
testing for locks, but that gets complicated with expression tracking
because the expressions inside the static inline look completely
different due to parameter passing. Hence, I change that to not descend
into them but to evaluate the static inlines themselves.
However, this means that this

	static inline void netif_tx_lock_bh(struct net_device *dev)
	{
		spin_lock_bh(&dev->_xmit_lock);
		dev->xmit_lock_owner = smp_processor_id();
	}

will now trigger a warning, and you have to annotate inlines as well
(also to get the callers checked properly):

	static inline void netif_tx_lock_bh(struct net_device *dev)
		__acquires(&dev->_xmit_lock)
	{
		spin_lock_bh(&dev->_xmit_lock);
		dev->xmit_lock_owner = smp_processor_id();
	}

Conceptually, I think this is cleaner because inlines are then treated
more like real functions, but you may disagree. The status quo, however,
has turned out to be unacceptable to users because of the spinlock
problem I mentioned above.

Can we, based on an evaluation of the problems I just outlined, either
apply the patches I'm about to send, or revert them to go back to the
completely unchecked contexts?

johannes