On Fri, Apr 15, 2011 at 12:46:29PM -0400, Mathieu Desnoyers wrote:
> * Huang Ying (ying.huang@xxxxxxxxx) wrote:
> > On 04/14/2011 05:07 AM, Mathieu Desnoyers wrote:
> > > * Huang Ying (ying.huang@xxxxxxxxx) wrote:
> > > [...]
> > >> + * rcu_read_lock and rcu_read_unlock are not used in gen_pool_alloc,
> > >> + * gen_pool_free, gen_pool_avail and gen_pool_size etc., because chunks
> > >> + * are only added into the pool, never deleted from it, unless the pool
> > >> + * itself is destroyed.  If chunks will be deleted from the pool,
> > >> + * rcu_read_lock and rcu_read_unlock should be used in these
> > >> + * functions.
> > >
> > > So how do you protect between pool destruction and adding chunks into
> > > the pool ?
> >
> > Because the pool itself will be freed on destruction, we need some
> > mechanism outside of the pool.  For example, if gen_pool_add() is called
> > via a device file IOCTL, we must un-register the device file first, and
> > destroy the pool after the last reference to the device has gone.
>
> I am concerned about the list_for_each_entry_rcu() (and thus
> rcu_dereference_raw()) used outside of rcu_read_lock/unlock pairs.
> Validation infrastructure has recently been added to RCU: it triggers
> warnings when these situations are encountered in some RCU debugging
> configurations.  The case of RCU list iteration is not covered by the
> checks, but it would make sense to be aware of it.
>
> So although it seems like your code does not require rcu read lock
> critical sections, I'd prefer to let Paul McKenney have a look.

As long as you add elements and never remove them, then you can get away
with using list_for_each_entry_rcu() outside of an RCU read-side critical
section.  But please comment this -- it is all too easy for someone to
decide later to start deleting elements without also inserting the needed
rcu_read_lock() and rcu_read_unlock() pairs.

But I have lost the thread -- what code am I supposed to look at?

							Thanx, Paul
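
For reference, here is a minimal sketch of the add-only pattern described
above, together with the kind of comment Paul asks for.  The structure and
function names below are illustrative only, not the actual lib/genalloc.c
patch:

#include <linux/list.h>
#include <linux/rculist.h>
#include <linux/spinlock.h>

struct example_chunk {
	struct list_head next_chunk;	/* link in pool->chunks */
	unsigned long start_addr;	/* first address covered by chunk */
	unsigned long end_addr;		/* one past last address in chunk */
	/* ... allocation bitmap, etc. ... */
};

struct example_pool {
	spinlock_t lock;		/* serializes writers (chunk add) only */
	struct list_head chunks;	/* add-only RCU list of chunks */
};

/* Writer side: chunks are only ever added, under pool->lock. */
static void example_pool_add_chunk(struct example_pool *pool,
				   struct example_chunk *chunk)
{
	spin_lock(&pool->lock);
	list_add_rcu(&chunk->next_chunk, &pool->chunks);
	spin_unlock(&pool->lock);
}

/*
 * Reader side: list_for_each_entry_rcu() without rcu_read_lock().
 *
 * This is safe only because chunks are never removed from the list
 * until the whole pool is destroyed, and pool destruction is
 * serialized against all readers by the caller (e.g. the device file
 * using the pool is unregistered first).  If chunk removal is ever
 * added, rcu_read_lock()/rcu_read_unlock() pairs -- and a grace
 * period before freeing each chunk -- become mandatory here.
 */
static struct example_chunk *example_pool_find(struct example_pool *pool,
					       unsigned long addr)
{
	struct example_chunk *chunk;

	list_for_each_entry_rcu(chunk, &pool->chunks, next_chunk) {
		if (addr >= chunk->start_addr && addr < chunk->end_addr)
			return chunk;
	}
	return NULL;
}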