On Tue, Jan 08, 2019 at 11:51:00PM +0100, Mikhail Morfikov wrote:
> Should nftables consume 500M+ of RAM while applying rules?
>
> I include some other file in the main file via:
>
> include "./sets/nft_set-bt_level1.nft"
>
> The set is ~7M in size:
>
> # ls -alh sets/nft_set-bt_level1.nft
> -rw-r--r-- 1 root root 7.2M 2019-01-07 17:26:17 sets/nft_set-bt_level1.nft
>
> And the file content looks like this:
>
> ----------------------------------
> #!/usr/bin/nft -f
>
> define bt_level1 = {
>     1.2.4.0-1.2.4.255,
>     1.2.8.0-1.2.8.255,
>     1.9.96.105,
>     ....
>     223.255.177.196,
>     223.255.241.132,
> }
>
> add set ip raw-set bt_level1 { type ipv4_addr; flags interval; auto-merge; elements = $bt_level1 }
> ----------------------------------
>
> Loading the rules takes around 5s, but ps_mem shows something like this:
>
> # ps_mem | grep nft
> 473.5 MiB + 192.5 KiB = 473.7 MiB nft
>
> Is that normal?

May I get your ruleset in private to reproduce this here? Or a
reproducer?

We had some issues in the past with calls to libgmp allocating overly
large bitmaps; probably not all of them are sorted out yet.

Thanks.
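A minimal standalone reproducer sketch, in case sharing the real ruleset is not possible: the script below generates an nft file of the same shape as the reported one (an interval set with many address ranges). The file name, output set name, and the 10.x.y.0/24 test ranges are assumptions for illustration, not taken from the actual blocklist.

```shell
#!/bin/sh
# Hypothetical reproducer: generate a large ipv4_addr interval set
# in the same layout as the nft_set-bt_level1.nft file quoted above.
# Set/file names and the 10.x.y.0 ranges are made-up test data.
out=nft_set-repro.nft

{
  echo '#!/usr/bin/nft -f'
  echo ''
  echo 'define bt_level1 = {'
  # Emit 65536 /24-sized ranges (256 * 256), one per line.
  i=0
  while [ "$i" -lt 256 ]; do
    j=0
    while [ "$j" -lt 256 ]; do
      echo "    10.$i.$j.0-10.$i.$j.255,"
      j=$((j + 1))
    done
    i=$((i + 1))
  done
  echo '}'
  echo ''
  echo 'add set ip raw-set bt_level1 { type ipv4_addr; flags interval; auto-merge; elements = $bt_level1 }'
} > "$out"

wc -c "$out"
```

Loading the generated file (as root) while watching `ps_mem | grep nft` should show whether memory use scales with the number of intervals on a machine without the original blocklist.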