On Fri, 2013-04-26 at 08:45 -0700, Paul E. McKenney wrote:
> I have done some crude coccinelle patterns in the past, but they are
> subject to false positives (from when you transfer the pointer from
> RCU protection to reference-count protection) and also false negatives
> (when you atomically increment some statistic unrelated to protection).
>
> I could imagine maintaining a per-thread count of the number of outermost
> RCU read-side critical sections at runtime, and then associating that
> counter with a given pointer at rcu_dereference() time, but this would
> require either compiler magic or an API for using a pointer returned
> by rcu_dereference(). This API could in theory be enforced by sparse.
>
> Dhaval Giani might have some ideas as well, adding him to CC.

We had this fix the other day, because the TCP prequeue code hit this check:

static inline struct dst_entry *skb_dst(const struct sk_buff *skb)
{
	/* If refdst was not refcounted, check we still are in a
	 * rcu_read_lock section
	 */
	WARN_ON((skb->_skb_refdst & SKB_DST_NOREF) &&
		!rcu_read_lock_held() &&
		!rcu_read_lock_bh_held());
	return (struct dst_entry *)(skb->_skb_refdst & SKB_DST_PTRMASK);
}

( http://git.kernel.org/cgit/linux/kernel/git/davem/net.git/commit/?id=093162553c33e9479283e107b4431378271c735d )

The problem was that the RCU-protected pointer escaped the RCU read-side critical section and was then used in another thread.

What would be cool (but maybe expensive) would be to get a cookie from rcu_read_lock() and check the cookie at rcu_dereference(). These cookies would have system-wide scope to catch any kind of error, because a per-thread counter would not catch the following problem:

rcu_read_lock();
ptr = rcu_dereference(x);
if (!ptr)
	return NULL;
...
rcu_read_unlock();
...
rcu_read_lock();
/* no reload of x, ptr might now be stale/freed */
if (ptr->field) {
	...
}