Hi,

I'm currently trying to fix an issue in the Kubernetes realm[1]: the baseline
is that they are trying to restore a ruleset with ~700k lines and it fails.
Needless to say, legacy iptables handles it just fine.

Meanwhile I found out there's a limit of 1024 iovecs when submitting the batch
to the kernel, and this is what they're hitting. I can work around that limit
by increasing each iovec (via BATCH_PAGE_SIZE), but keeping pace with legacy
seems ridiculous: with a scripted binary-search I checked the maximum working
number of restore items of:

(1) user-defined chains
(2) rules with merely a comment match present
(3) rules matching on saddr, daddr, iniface and outiface

Here's legacy compared to nft with different factors in BATCH_PAGE_SIZE:

         legacy   32 (stock)         64        128        256
    ----------------------------------------------------------------------
(1)   1'636'799    1'602'202     - NC -     - NC -     - NC -
(2)   1'220'159      302'079    604'160  1'208'320     - NC -
(3)   3'532'040      242'688    485'376    971'776  1'944'576

At this point I stopped, as the VM's 20GB of RAM became the limit
(iptables-nft-restore being OOM-killed instead of just failing).

What would you suggest? Should I just change BATCH_PAGE_SIZE to make it
"large enough", or is there a better approach?

Cheers, Phil

[1] https://github.com/kubernetes/kubernetes/issues/96018
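P.S. In case anyone wants to reproduce the numbers: the binary-search driver
boils down to roughly the sketch below. It is a reconstruction, not the exact
script I used; the function names (bisect, gen_ruleset, restore_ok), the rule
template, and the upper search bound are illustrative.

```shell
#!/bin/sh
# bisect PRED LO HI: print the largest N in [LO, HI) for which "PRED N"
# succeeds, assuming PRED LO succeeds and PRED HI fails.
bisect() {
	pred=$1 lo=$2 hi=$3
	while [ $((hi - lo)) -gt 1 ]; do
		mid=$(((lo + hi) / 2))
		if "$pred" "$mid"; then lo=$mid; else hi=$mid; fi
	done
	echo "$lo"
}

# gen_ruleset N: emit restore input with N comment-match-only rules
# (case (2) above; the other cases just use a different rule template).
gen_ruleset() {
	echo '*filter'
	seq "$1" | sed 's/.*/-A FORWARD -m comment --comment "rule &" -j ACCEPT/'
	echo 'COMMIT'
}

# restore_ok N: does a ruleset of N rules restore cleanly?
restore_ok() {
	gen_ruleset "$1" | iptables-nft-restore
}

# Invoked as e.g.:
#   bisect restore_ok 0 4194304
```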