Stephen Hemminger wrote:
> This version of x_tables (ip/ip6/arp) locking uses a per-cpu
> recursive lock that can be nested. It is sort of like the existing
> kernel_lock, rwlock_t and even the old 2.4 brlock.
>
> "Reader" is ip/arp/ip6 tables rule processing, which runs per-cpu.
> It needs to ensure that the rules are not being changed while a packet
> is being processed.
>
> "Writer" is used in two cases: the first is replacing rules, in which
> case all packets in flight have to be processed before the rules are
> swapped, then counters are read from the old (stale) info. The second
> case is where counters need to be read on the fly; in this case all
> CPUs are blocked from further rule processing until the values are
> aggregated.
>
> The idea for this came from an earlier version done by Eric Dumazet.
> Locking is done per-cpu; the fast path locks on the current cpu
> and updates counters. This reduces the contention of a
> single reader lock (in 2.6.29) without the delay of synchronize_net()
> (in 2.6.30-rc2).
>
> The mutex that was added for 2.6.30 in xt_table is unnecessary since
> there already is a mutex, xt[af].mutex, that is held.
>
> Signed-off-by: Stephen Hemminger <shemminger@xxxxxxxxxx>
>
> ---
> Changes from earlier patches:
>  - function name changes
>  - disable bottom half in info_rdlock

OK, but we still have a problem on machines with >= 250 cpus, because
calling spin_lock() 250 times is going to overflow preempt_count, as
each spin_lock() increases preempt_count by one.

PREEMPT_MASK: 0x000000ff

add_preempt_count() should warn us about this overflow if
CONFIG_DEBUG_PREEMPT is set:

#ifdef CONFIG_DEBUG_PREEMPT
	/*
	 * Spinlock count overflowing soon?
	 */
	DEBUG_LOCKS_WARN_ON((preempt_count() & PREEMPT_MASK) >=
				PREEMPT_MASK - 10);
#endif

My suggestion (in a previous mail) was to call preempt_enable() after
each spin_lock(), and of course to do the reverse on the unlock path:

> +/**
> + * xt_info_wrlock_bh - lock xt table info for update
> + *
> + * Locks out all readers, and blocks bottom half
> + */
> +void xt_info_wrlock_bh(void)
> +{
> +	int i;
> +
> +	local_bh_disable();

	/* at this point, preemption is disabled... */

> +	for_each_possible_cpu(i) {
> +		struct xt_info_lock *lock = &per_cpu(xt_info_locks, i);
> +		spin_lock(&lock->lock);

		preempt_enable(); /* avoid preempt count overflow */

> +		BUG_ON(lock->depth != -1);
> +	}
> +}
> +EXPORT_SYMBOL_GPL(xt_info_wrlock_bh);
> +
> +/**
> + * xt_info_wrunlock_bh - unlock xt table info after update
> + *
> + * Unlocks all readers, and unblocks bottom half
> + */
> +void xt_info_wrunlock_bh(void) __releases(&lock->lock)
> +{
> +	int i;
> +
> +	for_each_possible_cpu(i) {
> +		struct xt_info_lock *lock = &per_cpu(xt_info_locks, i);
> +		BUG_ON(lock->depth != -1);

		preempt_disable(); /* restore preempt count lowered in xt_info_wrlock_bh */

> +		spin_unlock(&lock->lock);
> +	}
> +	local_bh_enable();
> +}
> +EXPORT_SYMBOL_GPL(xt_info_wrunlock_bh);
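
For clarity, here is an untested sketch of what I mean, applied to the
write-side helpers above. The struct layout and the per-cpu declaration
are only repeated from the patch so the snippet stands on its own (the
exact meaning of ->depth is whatever the patch defines; -1 is just what
the BUG_ON checks); the only change is the preempt count balancing:

#include <linux/spinlock.h>
#include <linux/percpu.h>
#include <linux/preempt.h>
#include <linux/interrupt.h>
#include <linux/cpumask.h>
#include <linux/bug.h>

/* per-cpu lock as in the patch; -1 in ->depth is the "not held by the
 * local reader path" value that the BUG_ON()s below expect */
struct xt_info_lock {
	spinlock_t lock;
	int depth;
};
static DEFINE_PER_CPU(struct xt_info_lock, xt_info_locks);

void xt_info_wrlock_bh(void)
{
	int i;

	local_bh_disable();	/* softirq count != 0: preemption stays
				 * off for the whole locked section */
	for_each_possible_cpu(i) {
		struct xt_info_lock *lock = &per_cpu(xt_info_locks, i);

		spin_lock(&lock->lock);
		preempt_enable();	/* undo spin_lock()'s increment so
					 * the 8-bit PREEMPT_MASK cannot
					 * overflow with >= 250 cpus */
		BUG_ON(lock->depth != -1);
	}
}

void xt_info_wrunlock_bh(void)
{
	int i;

	for_each_possible_cpu(i) {
		struct xt_info_lock *lock = &per_cpu(xt_info_locks, i);

		BUG_ON(lock->depth != -1);
		preempt_disable();	/* restore the count lowered in
					 * xt_info_wrlock_bh() before the
					 * matching spin_unlock() */
		spin_unlock(&lock->lock);
	}
	local_bh_enable();
}

The preempt_enable()/preempt_disable() pairs stay balanced, and they are
safe because local_bh_disable() already keeps preempt_count() non-zero,
so the task cannot be preempted while the spinlocks are held.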