From: Alexei Starovoitov <alexei.starovoitov@xxxxxxxxx>
Date: Mon, 23 Feb 2015 14:17:06 -0800

> I'm not sure all of these counting optimizations will help in the end.
> Say we have a small table and a lot of inserts are coming in at the
> same time. rhashtable_expand kicks in and all new inserts go into the
> future table while expansion is happening.
> Since expand kicks in quickly, the old table will not have long chains
> per bucket, so only a few unzips and corresponding synchronize_rcu
> calls are needed and we're done with the expand.
> Now the future table becomes the only table, but it still has a lot of
> entries, since insertions were happening, and this table has long
> per-bucket chains, so the next expand will involve a lot of
> synchronize_rcu calls and will take a very long time.
> So whether we count while inserting or not, and whether we grow by 2x
> or by 8x, we still have the underlying problem of a very large number
> of synchronize_rcu calls.
> A malicious user who knows this can stall the whole system.
> Please tell me I'm missing something.

This is why I have just suggested making inserts block: the expander can then look at the count of pending inserts and, if necessary, keep expanding the table further before releasing the blocked insertion threads.
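A minimal sketch of just the sizing decision being proposed, with all names (`expand_target`, `pending_inserts`) hypothetical and not part of the kernel rhashtable API: before waking blocked inserters, the expander keeps doubling the bucket count until the table can cover both the elements it already holds and the inserts that queued up while it ran, so one pass of expansion suffices.

```c
/* Illustrative only -- not the kernel implementation.  Given the
 * current bucket count, the number of stored elements, and the
 * number of inserts blocked waiting for the expansion to finish,
 * return the bucket count the expander should grow to before
 * releasing the blocked insertion threads. */
static int expand_target(int size, int nelems, int pending_inserts)
{
	int need = nelems + pending_inserts;

	/* Grow by doubling, as rhashtable does, until the table is
	 * large enough for everything currently waiting; this avoids
	 * an immediate second expand (and its synchronize_rcu cost)
	 * right after the inserters are unblocked. */
	while (size < need)
		size *= 2;
	return size;
}
```

The point of folding `pending_inserts` into the target is that the expensive part (the chain unzips and their `synchronize_rcu` calls) is paid once for the final size, rather than once per doubling as the backlog drains.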