Re: [Patch nf 3/3] xt_hashlimit: limit the max size of hashtable

Cong Wang <xiyou.wangcong@xxxxxxxxx> wrote:
> On Fri, Jan 31, 2020 at 2:08 PM Florian Westphal <fw@xxxxxxxxx> wrote:
> >
> > Cong Wang <xiyou.wangcong@xxxxxxxxx> wrote:
> > > The user-specified hashtable size is unbound, this could
> > > easily lead to an OOM or a hung task as we hold the global
> > > mutex while allocating and initializing the new hashtable.
> > >
> > > The max value is derived from the max value when chosen by
> > > the kernel.
> > >
> > > Reported-and-tested-by: syzbot+adf6c6c2be1c3a718121@xxxxxxxxxxxxxxxxxxxxxxxxx
> > > Cc: Pablo Neira Ayuso <pablo@xxxxxxxxxxxxx>
> > > Cc: Jozsef Kadlecsik <kadlec@xxxxxxxxxxxxx>
> > > Cc: Florian Westphal <fw@xxxxxxxxx>
> > > Signed-off-by: Cong Wang <xiyou.wangcong@xxxxxxxxx>
> > > ---
> > >  net/netfilter/xt_hashlimit.c | 6 +++++-
> > >  1 file changed, 5 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/net/netfilter/xt_hashlimit.c b/net/netfilter/xt_hashlimit.c
> > > index 57a2639bcc22..6327134c5886 100644
> > > --- a/net/netfilter/xt_hashlimit.c
> > > +++ b/net/netfilter/xt_hashlimit.c
> > > @@ -272,6 +272,8 @@ dsthash_free(struct xt_hashlimit_htable *ht, struct dsthash_ent *ent)
> > >  }
> > >  static void htable_gc(struct work_struct *work);
> > >
> > > +#define HASHLIMIT_MAX_SIZE 8192
> > > +
> > >  static int htable_create(struct net *net, struct hashlimit_cfg3 *cfg,
> > >                        const char *name, u_int8_t family,
> > >                        struct xt_hashlimit_htable **out_hinfo,
> > > @@ -290,7 +292,7 @@ static int htable_create(struct net *net, struct hashlimit_cfg3 *cfg,
> > >               size = (nr_pages << PAGE_SHIFT) / 16384 /
> > >                      sizeof(struct hlist_head);
> > >               if (nr_pages > 1024 * 1024 * 1024 / PAGE_SIZE)
> > > -                     size = 8192;
> > > +                     size = HASHLIMIT_MAX_SIZE;
> > >               if (size < 16)
> > >                       size = 16;
> > >       }
> > > @@ -848,6 +850,8 @@ static int hashlimit_mt_check_common(const struct xt_mtchk_param *par,
> > >
> > >       if (cfg->gc_interval == 0 || cfg->expire == 0)
> > >               return -EINVAL;
> > > +     if (cfg->size > HASHLIMIT_MAX_SIZE)
> > > +             return -ENOMEM;
> >
> > Hmm, won't that break restore of rulesets that have something like
> >
> > --hashlimit-size 10000?
> >
> > AFAIU this limits the module to vmalloc requests of only 64kbyte.
> > I'm not opposed to a limit (or a cap), but 64k seems a bit low to me.
> 
> 8192 is from the current code which handles kernel-chosen size
> (that is cfg->size==0), I personally have no idea what the max
> should be. :)

Me neither :-/

> Please suggest a number.

I would propose a max alloc size (hard limit) of ~8 MByte of vmalloc
space, or maybe 16 MByte at most.

A max size of 1048576 buckets -> ~8 MByte vmalloc request -> allows
storing up to 2**23 entries.

In order to prevent breaking userspace, perhaps make it so that the
kernel caps cfg.max at twice that value?  That would allow storing up
to 16777216 addresses with an average chain depth of 16 (which is
quite large).  We could increase the max limit if someone presents a
use case.

What do you think?


