Hi Florian,

thank you for your comments.

On 1/3/19 6:31 PM, Florian Westphal wrote:
> Jann Haber <jann.haber@xxxxxxxxxx> wrote:
>> - flags interval: We have some maps (e.g. for SNAT) that are supposed
>> to contain IPv4 nets (>= /31) as well as single IPs (= /32) as keys.
>> If we make one named map out of both (with flags interval), the single
>> IPs are not matched (even if /32 is given explicitly). It is, however,
>> possible to load the set (no syntax errors or the like).
>
> Which kernel version is that?

We are currently running Debian stretch with the kernel from backports:

jannh@nat-2:~$ uname -a
Linux nat-2 4.18.0-0.bpo.3-amd64 #1 SMP Debian 4.18.20-2~bpo9+1 (2018-12-08) x86_64 GNU/Linux

We aim to update to buster soon, which currently ships 4.19.12. Maybe that would solve the problem at hand.

>> - counters: In our iptables setup, we use the counters to count our
>> users' traffic, and we want to do the same in nftables. We therefore
>> created a bunch of named counters (about 22k of them) and a map
>> mapping a certain subnet/IP to the counter name. When we load the
>> rules with "nft -f", there is a delay of some seconds during which no
>> more packets are processed. Since we do this frequently, these are
>> frequent outages of our entire network, which is unacceptable for us.
>> When we comment out the counters and the map, the delay is gone.
>
> Could I ask why you need to re-load everything often?

If we had only static counters, I agree that we should rather just list/reset the counters to count the traffic. However, when new members register, we need to add some rules to our ruleset. Our members can also add a DNAT to our database via our webpage; the data from there is then translated into the nftables ruleset. A Python script creates a new ruleset every 15 minutes: it atomically reads the counters, flushes the ruleset and adds all the new rules (via nft -f).
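For reference, a minimal sketch of the kind of setup I mean (table, counter and map names are made up here, and the addresses are placeholders; our real ruleset is much larger):

```nft
table inet filter {
	# named counters, one per user (we have ~22k of these)
	counter user1_ctr { }
	counter user2_ctr { }

	# interval map mixing a /24 net and a single IP as keys;
	# in our experience the single-IP entry is not matched
	map user_counters {
		type ipv4_addr : counter
		flags interval
		elements = { 10.0.0.0/24 : "user1_ctr",
		             10.0.1.5 : "user2_ctr" }
	}

	chain forward {
		type filter hook forward priority 0; policy accept;
		# account traffic against the per-user named counter
		counter name ip saddr map @user_counters
	}
}
```

The whole file starts with "flush ruleset" and is applied in one transaction with "nft -f", so the replacement itself should be atomic.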
>> Any suggestions on how this can be improved, or on where we go wrong,
>> that we experience this delay?
>
> The only major issue I see is that adding a lot of objects doesn't
> scale right now.

Is there a better solution than adding each counter separately and then using a map to reference the counter names? I first wanted to use a counter as a verdict in a vmap, but that does not seem to work ...

> I will get to it next week, adding 20k named counters should not be a
> problem.

I will have to clear it with our data privacy officer; maybe I can make our full ruleset available to you for debugging :)

Thank you,
Jann