Changli Gao wrote:
> On Thu, Apr 15, 2010 at 11:35 PM, Patrick McHardy <kaber@xxxxxxxxx> wrote:
>> Changli Gao wrote:
>>> static int
>>> nfqnl_rcv_nl_event(struct notifier_block *this,
>>>                    unsigned long event, void *ptr)
>>> {
>>>         struct netlink_notify *n = ptr;
>>>
>>>         if (event == NETLINK_URELEASE && n->protocol == NETLINK_NETFILTER) {
>>>                 int i;
>>>
>>>                 /* destroy all instances for this pid */
>>>                 spin_lock(&instances_lock);
>>>                 for (i = 0; i < INSTANCE_BUCKETS; i++) {
>>>                         struct hlist_node *tmp, *t2;
>>>                         struct nfqnl_instance *inst;
>>>                         struct hlist_head *head = &instance_table[i];
>>>
>>>                         hlist_for_each_entry_safe(inst, tmp, t2, head, hlist) {
>>>                                 if ((n->net == &init_net) &&
>>>                                     (n->pid == inst->peer_pid))
>>>                                         __instance_destroy(inst);
>>>                         }
>>>                 }
>>>                 spin_unlock(&instances_lock);
>>>         }
>>>         return NOTIFY_DONE;
>>> }
>>>
>>> static struct notifier_block nfqnl_rtnl_notifier = {
>>>         .notifier_call  = nfqnl_rcv_nl_event,
>>> };
>>>
>> Ah, right. So call nfnl_lock() or convert the spinlock to a
>> mutex.
>>
>
> We can't simply convert the spinlock to a mutex. The notifier chain is
> an atomic notifier chain.

Well, then use reference counting, redo the lookup, or whatever.
Really, this isn't that hard.
--
To unsubscribe from this list: send the line "unsubscribe netfilter-devel"
in the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html