On Tue, 21 Apr 2009 05:56:55 +0200 Eric Dumazet <dada1@xxxxxxxxxxxxx> wrote:

> Lai Jiangshan wrote:
> > Stephen Hemminger wrote:
> >> +/**
> >> + * xt_info_rdlock_bh - recursive read lock for xt table info
> >> + *
> >> + * Table processing calls this to hold off any changes to table
> >> + * (on current CPU). Always leaves with bottom half disabled.
> >> + * If called recursively, then assumes bh/preempt already disabled.
> >> + */
> >> +void xt_info_rdlock_bh(void)
> >> +{
> >> +	struct xt_info_lock *lock;
> >> +
> >> +	preempt_disable();
> >> +	lock = &__get_cpu_var(xt_info_locks);
> >> +	if (likely(++lock->depth == 0))
> >
> > Maybe I missed something. I think softirqs may still be enabled here.
> > So what happens when xt_info_rdlock_bh() is called recursively here?
>
> Well, the first time it's called, you are right that softirqs are enabled
> until the point we call spin_lock_bh(), right after this line:
>
> >> +		spin_lock_bh(&lock->lock);
> >> +	preempt_enable_no_resched();
>
> After this line, both softirqs and preemption are disabled.
>
> Future calls to this function temporarily raise the preempt count and then
> decrease it. (Null effect.)
>
> >> +}
> >> +EXPORT_SYMBOL_GPL(xt_info_rdlock_bh);
> >> +
> >
> > Is this OK for you:
> >
> > void xt_info_rdlock_bh(void)
> > {
> > 	struct xt_info_lock *lock;
> >
> > 	local_bh_disable();
>
> Well, Stephen was trying not to change the preempt count for the 2nd, 3rd,
> 4th... invocation of this function. This is how I understood the code.
>
> > 	lock = &__get_cpu_var(xt_info_locks);
> > 	if (likely(++lock->depth == 0))
> > 		spin_lock(&lock->lock);
> > }
> >
> > Lai.

In this version, I was trying to use/preserve the optimizations that are
done in spin_unlock_bh().
--
To unsubscribe from this list: send the line "unsubscribe netfilter-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html