Re: Ottawa and slow hash-table resize

On Mon, Feb 23, 2015 at 2:34 PM, David Miller <davem@xxxxxxxxxxxxx> wrote:
> From: Alexei Starovoitov <alexei.starovoitov@xxxxxxxxx>
> Date: Mon, 23 Feb 2015 14:17:06 -0800
>
>> I'm not sure all of these counting optimizations will help in the end.
>> Say we have a small table and a lot of inserts arriving at the same
>> time. rhashtable_expand kicks in and all new inserts go into the
>> future table while the expansion is happening.
>> Since the expand kicks in quickly, the old table will not have long
>> chains per bucket, so only a few unzips and their corresponding
>> synchronize_rcu calls and we're done with the expand.
>> Now the future table becomes the only table, but it already has a lot
>> of entries, since insertions were happening all along, and it has
>> long per-bucket chains, so the next expand will need a lot of
>> synchronize_rcu calls and will take a very long time.
>> So whether we count while inserting or not, and whether we grow by
>> 2x or by 8x, we still have the underlying problem of a very large
>> number of synchronize_rcu calls.
>> A malicious user who knows this can stall the whole system.
>> Please tell me I'm missing something.
>
> This is why I have just suggested that we make inserts block,
> and the expander looks at the count of pending inserts in an
> effort to keep expanding the table further if necessary before
> releasing the blocked insertion threads.

Yes, blocking inserts would solve the problem, but it would limit
where rhashtable is applicable.
Also, the 'count of pending inserts' is going to be very small and
meaningless, since it only counts the threads that are currently
sleeping in insert, which is not very useful for predicting future
expansions.
As an alternative we could store two chains of pointers
per element (one chain for the old table and another for the
future table). That would avoid all the unzipping logic at the cost
of extra memory. Maybe there are other solutions.
I think, as a minimum, something like blocking inserts is
needed for stable. IMO the current situation with nft_hash
is not just annoying, it's a bug.
--
To unsubscribe from this list: send the line "unsubscribe netfilter-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html



