On 17.01, Herbert Xu wrote:
> On Fri, Jan 16, 2015 at 09:31:56PM +0000, Patrick McHardy wrote:
> >
> > I'm tending towards deferring resize operations while dumps are in
> > progress. Since we only allow dumps by root, it seems the worst
> > thing that can happen is that we run using a non optimal hash,
> > which is comparable to having a badly structured ruleset.
>
> BTW, the current rhashtable has a deficiency in that the insert
> operation never fails. However, with delayed resizing, we must
> allow insertion to fail or there can be too many insertions that
> may overwhelm the hash table or, even worse, overflow the hash table
> size counter.
>
> So in this scenario, a dump may cause insertion failures by delaying
> the completion of the expansion.

Resizing might also fail because of memory allocation problems, but
I'd argue that it's better to continue with a non-optimally sized table
and retry later than to fail completely, at least unless the API user
has explicitly requested this behaviour.

As for the element counter, yeah, it should prevent overflow. In that
case I agree that failing insertion is the easiest solution.

--
To unsubscribe from this list: send the line "unsubscribe netfilter-devel"
in the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
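[Editorial aside: the trade-off discussed above can be sketched in a few lines of C. This is a toy model, not the actual kernel rhashtable API; the struct, the `resize_deferred` flag, and the load-factor policy are all hypothetical, chosen only to show why insertion must be allowed to fail once expansion is deferred, e.g. while a dump is in progress.]

```c
#include <assert.h>
#include <stddef.h>

/* Toy hash-table bookkeeping (hypothetical; not the kernel's struct). */
struct toy_table {
	size_t size;            /* number of buckets */
	size_t nelems;          /* current element count */
	int resize_deferred;    /* e.g. a dump is in progress */
};

/* Hypothetical policy: allow on average two elements per bucket. */
#define MAX_LOAD_FACTOR 2

/*
 * Returns 0 on success, -1 if the insert must fail: the table is at
 * maximum load and cannot be grown right now. Without this failure
 * path, deferred resizing would let the element count grow without
 * bound (and eventually overflow the counter).
 */
int toy_insert(struct toy_table *t)
{
	if (t->nelems >= t->size * MAX_LOAD_FACTOR) {
		if (t->resize_deferred)
			return -1;      /* cannot grow: reject the insert */
		t->size *= 2;           /* immediate expansion */
	}
	t->nelems++;
	return 0;
}
```

Once the dump finishes and `resize_deferred` is cleared, the next insert can trigger the expansion and succeed again, which matches the "continue with a non-optimally sized table and retry later" behaviour argued for above.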