David Laight <David.Laight@xxxxxxxxxx> wrote:
> From: Florian Westphal
> > Sent: 30 May 2017 10:38
> >
> > Quoting Joe Stringer:
> > If a user loads nf_conntrack_ftp, sends FTP traffic through a network
> > namespace, destroys that namespace then unloads the FTP helper module,
> > then the kernel will crash.
> >
> > Events that lead to the crash:
> > 1. conntrack is created with ftp helper in netns x
> > 2. This netns is destroyed
> > 3. netns destruction is scheduled
> > 4. netns destruction wq starts, removes netns from global list
> > 5. ftp helper is unloaded, which resets all helpers of the conntracks
> >    via for_each_net()
> >
> > but because netns is already gone from list the for_each_net() loop
> > doesn't include it, therefore all of these conntracks are unaffected.
> >
> > 6. helper module unload finishes
> > 7. netns wq invokes destructor for rmmod'ed helper
> >
...
> >  void
> >  nf_ct_iterate_destroy(int (*iter)(struct nf_conn *i, void *data), void *data)
> > @@ -1734,6 +1736,13 @@ nf_ct_iterate_destroy(int (*iter)(struct nf_conn *i, void *data), void *data)
> >  	}
> >  	rtnl_unlock();
> >
> > +	/* Need to wait for netns cleanup worker to finish, if its
> > +	 * running -- it might have deleted a net namespace from
> > +	 * the global list, so our __nf_ct_unconfirmed_destroy() might
> > +	 * not have affected all namespaces.
> > +	 */
> > +	net_ns_barrier();
> > +
> A problem I see is that nothing obvious guarantees that the cleanup worker
> has actually started.

If it hasn't even started, the earlier for_each_net() walk has seen all
net namespaces and we managed to clear the helper extensions of all
conntracks.

The same is true if it has already finished: the netns cleanup work
queue has freed all the affected conntracks we might have missed.

We are only in trouble if the netns cleanup work queue is running
concurrently: netns cleanup first removes net namespaces from the
global list, so nf_ct_iterate_destroy() might have missed those
namespaces.
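
To make the concurrent case easier to see, here is a small user-space
sketch of why taking and releasing the mutex the cleanup worker holds
acts as a barrier.  It is only a model with made-up names
(cleanup_worker, module_unload, list_mutex) and pthreads in place of
the kernel primitives, not the real net_mutex/cleanup_net() code:

/* User-space model of the barrier idea, not kernel code.  The mutex
 * plays the role of net_mutex: the cleanup worker holds it from the
 * moment it unlinks a namespace from the global list until all exit
 * handlers have run, so lock+unlock in the unload path cannot return
 * while such a worker is still mid-flight.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t list_mutex = PTHREAD_MUTEX_INITIALIZER;

static void *cleanup_worker(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&list_mutex);
	printf("worker: unlink netns from global list\n");
	sleep(1);		/* pretend the exit handlers take a while */
	printf("worker: run exit handlers / conntrack destructors\n");
	pthread_mutex_unlock(&list_mutex);
	return NULL;
}

static void module_unload(void)
{
	printf("unload: iterate namespaces still on the list\n");

	/* The "barrier": if no worker is running we pass immediately
	 * (the iteration above saw everything that was on the list),
	 * if one is running we block until its destructors finished.
	 */
	pthread_mutex_lock(&list_mutex);
	pthread_mutex_unlock(&list_mutex);

	printf("unload: safe to let the module code go away now\n");
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, cleanup_worker, NULL);
	usleep(100 * 1000);	/* let the worker grab the mutex first */
	module_unload();
	pthread_join(t, NULL);
	return 0;
}

The "running concurrently" case above maps to the worker already
holding the mutex when module_unload() reaches the lock/unlock pair;
the other two cases are the ones where that lock is uncontended.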