On 02/24/15 at 10:45am, josh@xxxxxxxxxxxxxxxx wrote:
> On Tue, Feb 24, 2015 at 01:26:03PM -0500, David Miller wrote:
> > Actually, first of all, let's not start with larger tables.
> >
> > The network namespace folks showed clearly that hash tables
> > are detrimental to per-ns memory costs. So they definitely
> > want us to start with extremely small tables.
>
> Agreed; ideally, the initial table size would just use a single page
> for the array of bucket heads, which would give 1024 buckets on 32-bit
> systems or 512 on 64-bit systems. That's more than enough for many
> client systems, and for many single-application network namespaces.

No objection at all. I certainly understand the implications for netns;
after all, that is the reason why rhashtable exists. However, the
initial table size plus the number of growth cycles has implications
for the maximum number of bucket locks (see below). So it's a matter of
balance that needs some thought and experimentation.

> > But once we know something is actively used, sure, increase
> > the table grow rate as a response to demand.
> >
> > So how feasible is it to grow by 4x, 8x, or other powers of
> > two in one resize operation?
>
> Quite feasible. Actually, any integer multiple works fine, though I
> think a power of two makes sense. I'd suggest trying 4x with the same
> workloads that had an issue at 2x, and seeing how that goes.

There is a side effect: we can't grow the number of bucket locks by
more than 2x if we grow the table itself faster than 2x. So if we start
out with a table size of 512 and grow 4 times in a row, we will end up
with a theoretical maximum of 4K bucket locks. Probably enough, though.
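
To make the page-sizing arithmetic above concrete, here is a minimal
user-space sketch (plain C, not the kernel's actual rhashtable code) of
deriving the initial bucket count from the page size. The hard-coded
PAGE_SIZE is an assumption for illustration; the kernel gets it from
the architecture:

#include <stdio.h>

/* Illustrative assumption: a 4 KiB page, as on most Linux systems. */
#define PAGE_SIZE 4096

int main(void)
{
	/* Each bucket head is a single pointer. */
	unsigned long nbuckets = PAGE_SIZE / sizeof(void *);

	/* 4096 / 4 = 1024 buckets on 32-bit systems,
	 * 4096 / 8 =  512 buckets on 64-bit systems. */
	printf("%lu buckets per page\n", nbuckets);
	return 0;
}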
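
The "any integer multiple works fine" point is easy to see in code: the
rehash loop is identical whether the table doubles or quadruples, as
long as every entry is re-bucketed under the new mask. A toy sketch,
not the kernel implementation (singly linked nodes, no locking or RCU),
growing by an arbitrary power-of-two factor:

#include <stdlib.h>

struct node {
	unsigned long hash;	/* precomputed hash of the key */
	struct node *next;
};

/* Toy resize: move every entry from 'old' (old_size buckets, a power
 * of two) into a table of old_size * factor buckets.  'factor' can be
 * 2, 4, 8, ... -- the loop below does not change with the factor. */
static struct node **grow(struct node **old, unsigned long old_size,
			  unsigned long factor)
{
	unsigned long new_size = old_size * factor;
	struct node **new = calloc(new_size, sizeof(*new));
	unsigned long i;

	if (!new)
		return NULL;

	for (i = 0; i < old_size; i++) {
		struct node *n = old[i];

		while (n) {
			struct node *next = n->next;
			unsigned long b = n->hash & (new_size - 1);

			n->next = new[b];
			new[b] = n;
			n = next;
		}
	}
	free(old);
	return new;
}

The only requirement for the masking to stay valid is that the new size
remains a power of two, which any power-of-two factor preserves.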
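
For context on why the lock count matters at all: in an rhashtable-style
design, a bucket's lock is picked from a smaller power-of-two lock array
by masking the bucket hash, so several buckets share one lock once the
table outgrows the lock array. A hypothetical sketch under that
assumption (names and types are illustrative, not the kernel's):

#include <pthread.h>

struct table {
	void			**buckets;
	unsigned long		size;		/* buckets, power of two */
	pthread_spinlock_t	*locks;
	unsigned long		locks_mask;	/* nlocks - 1 */
};

/* Several buckets map to the same lock whenever size > nlocks. */
static pthread_spinlock_t *bucket_lock(struct table *t, unsigned long hash)
{
	return &t->locks[hash & t->locks_mask];
}

Under these assumptions, each 4x grow that only doubles the lock array
doubles the number of buckets sharing a single lock, which is why the
maximum lock count puts a practical bound on how aggressively fast
growth can be used.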