On Fri, Apr 26, 2013 at 08:45:47AM -0700, Paul E. McKenney wrote:
> On Fri, Apr 26, 2013 at 10:03:13AM +0200, Peter Zijlstra wrote:
> > On Fri, Apr 26, 2013 at 10:45:08AM +0900, Simon Horman wrote:
> >
> > > @@ -975,8 +975,7 @@ static void *ip_vs_conn_array(struct seq_file *seq, loff_t pos)
> > >  				return cp;
> > >  			}
> > >  		}
> > > -		rcu_read_unlock();
> > > -		rcu_read_lock();
> > > +		cond_resched_rcu_lock();
> > >  	}
> >
> > While I agree with the sentiment, I do find it a somewhat dangerous construct in
> > that it might become far too easy to keep an RCU reference over this break and
> > thus violate the RCU premise.
> >
> > Is there anything that can detect this? Sparse / coccinelle / smatch? If so, it
> > would be great to add this to these checkers.
>
> I have done some crude coccinelle patterns in the past, but they are
> subject to false positives (from when you transfer the pointer from
> RCU protection to reference-count protection) and also false negatives
> (when you atomically increment some statistic unrelated to protection).
>
> I could imagine maintaining a per-thread count of the number of outermost
> RCU read-side critical sections at runtime, and then associating that
> counter with a given pointer at rcu_dereference() time, but this would
> require either compiler magic or an API for using a pointer returned
> by rcu_dereference(). This API could in theory be enforced by sparse.

Luckily, cond_resched_rcu_lock() will typically only occur within loops, and
loops tend to be contained in a single source file. This would suggest that a
simple static checker should be able to tell without too much magic, right?
All it needs to do is track pointers returned from rcu_dereference*() and
see if they're used after cond_resched_rcu_lock().

Also, cond_resched_rcu_lock() will only drop a single level of RCU refs, so
that should be easier still.
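To make the hazard concrete, below is a hypothetical sketch, not the actual
kernel helper or the IPVS code. It assumes cond_resched_rcu_lock() simply
drops and reacquires the RCU read lock around cond_resched() (the real helper
may add need_resched() or config checks), and it shows the kind of
use-after-break that a checker tracking rcu_dereference*() results would have
to flag. All names other than the RCU/list primitives are made up for
illustration.

#include <linux/rculist.h>
#include <linux/rcupdate.h>
#include <linux/sched.h>

/* Hypothetical element type and consumer, for illustration only. */
struct my_entry {
	struct list_head list;
	int value;
};

static void consume(struct my_entry *e) { }

/*
 * Sketch of the helper under discussion; assumed here to simply close and
 * reopen the RCU read-side critical section around a reschedule point.
 */
static inline void cond_resched_rcu_lock(void)
{
	rcu_read_unlock();
	cond_resched();
	rcu_read_lock();
}

/* The misuse a static checker would need to catch. */
static void walk(struct list_head *head)
{
	struct my_entry *e;

	rcu_read_lock();
	list_for_each_entry_rcu(e, head, list) {
		consume(e);		 /* fine: inside the read-side critical section */
		cond_resched_rcu_lock(); /* a grace period may elapse here */
		consume(e);		 /* BUG: e may have been freed by now; even
					  * continuing the traversal from e is unsafe */
	}
	rcu_read_unlock();
}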