On Wed, May 08, 2024 at 04:08:20PM +0200, Florian Westphal wrote:
> Sven Auhagen <sven.auhagen@xxxxxxxxxxxx> wrote:
> > When the sets are larger I now always get an error:
> >
> > ./main.nft:13:1-26: Error: Could not process rule: Cannot allocate memory
> > destroy table inet filter
> > ^^^^^^^^^^^^^^^^^^^^^^^^^^
> >
> > along with the kernel message
> >
> > percpu: allocation failed, size=16 align=8 atomic=1, atomic alloc failed, no space left
>
> This specific pcpu allocation failure aside, I think we need to reduce
> memory waste with the flush op.
>
> Flushing a set with 1m elements will need >100 Mbytes worth of memory for
> the delsetelem transactional log.
>
> The ratio of preamble to set_elem isn't great: we need 88 bytes for the
> nft_trans struct and 24 bytes to store one set elem, i.e. 112 bytes per
> to-be-deleted element.
>
> I'd say we should look into adding a del_setelem_many struct that stores
> e.g. up to 20 elem_priv pointers. With such a ratio we could probably
> get memory waste down to ~20 Mbytes for 1m element sets.

You are right, and besides the memory, loading a large set like this also takes a lot of time.