Cong Wang <xiyou.wangcong@xxxxxxxxx> wrote:
> The user-specified hashtable size is unbounded; this could
> easily lead to an OOM or a hung task, as we hold the global
> mutex while allocating and initializing the new hashtable.
>
> The max value is derived from the max value used when the
> size is chosen by the kernel.
>
> Reported-and-tested-by: syzbot+adf6c6c2be1c3a718121@xxxxxxxxxxxxxxxxxxxxxxxxx
> Cc: Pablo Neira Ayuso <pablo@xxxxxxxxxxxxx>
> Cc: Jozsef Kadlecsik <kadlec@xxxxxxxxxxxxx>
> Cc: Florian Westphal <fw@xxxxxxxxx>
> Signed-off-by: Cong Wang <xiyou.wangcong@xxxxxxxxx>
> ---
>  net/netfilter/xt_hashlimit.c | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/net/netfilter/xt_hashlimit.c b/net/netfilter/xt_hashlimit.c
> index 57a2639bcc22..6327134c5886 100644
> --- a/net/netfilter/xt_hashlimit.c
> +++ b/net/netfilter/xt_hashlimit.c
> @@ -272,6 +272,8 @@ dsthash_free(struct xt_hashlimit_htable *ht, struct dsthash_ent *ent)
>  }
>  static void htable_gc(struct work_struct *work);
>
> +#define HASHLIMIT_MAX_SIZE 8192
> +
>  static int htable_create(struct net *net, struct hashlimit_cfg3 *cfg,
>  			 const char *name, u_int8_t family,
>  			 struct xt_hashlimit_htable **out_hinfo,
> @@ -290,7 +292,7 @@ static int htable_create(struct net *net, struct hashlimit_cfg3 *cfg,
>  		size = (nr_pages << PAGE_SHIFT) / 16384 /
>  		       sizeof(struct hlist_head);
>  		if (nr_pages > 1024 * 1024 * 1024 / PAGE_SIZE)
> -			size = 8192;
> +			size = HASHLIMIT_MAX_SIZE;
>  		if (size < 16)
>  			size = 16;
>  	}
> @@ -848,6 +850,8 @@ static int hashlimit_mt_check_common(const struct xt_mtchk_param *par,
>
>  	if (cfg->gc_interval == 0 || cfg->expire == 0)
>  		return -EINVAL;
> +	if (cfg->size > HASHLIMIT_MAX_SIZE)
> +		return -ENOMEM;

Hmm, won't that break restore of rulesets that have something like
--hashlimit-size 10000?

AFAIU this limits the module to vmalloc requests of only 64 kbyte.

I'm not opposed to a limit (or a cap), but 64k seems a bit low to me.
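
For reference on where the 64k figure comes from: the table allocated in
htable_create() is roughly size * sizeof(struct hlist_head) plus a fixed
header, so the bucket array dominates. A minimal userspace sketch of that
arithmetic (struct hlist_head stood in by a single pointer, i.e. assuming
a 64-bit build; the variable names here are illustrative):

#include <stdio.h>

/* stand-in for the kernel's struct hlist_head: a single pointer */
struct hlist_head { void *first; };

#define HASHLIMIT_MAX_SIZE 8192

int main(void)
{
	/* dominant term of the htable_create() allocation:
	 * one hlist_head per bucket */
	size_t capped  = (size_t)HASHLIMIT_MAX_SIZE * sizeof(struct hlist_head);
	size_t restore = (size_t)10000 * sizeof(struct hlist_head);

	printf("capped table: %zu bytes (%zu KiB)\n", capped, capped / 1024);
	printf("size 10000:   %zu bytes (~%zu KiB)\n", restore, restore / 1024);
	return 0;
}

That prints 65536 bytes (64 KiB) for the proposed cap, and only ~78 KiB
for a --hashlimit-size 10000 table, which is why failing the latter with
-ENOMEM looks like a regression for existing rulesets.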