On Mon, 28 Jun 2021 22:02:41 -0400
"Neal P. Murphy" <neal.p.murphy@xxxxxxxxxxxx> wrote:

> On Mon, 28 Jun 2021 10:43:10 +0100
> Kerin Millar <kfm@xxxxxxxxxxxxx> wrote:
> 
> > Now you benefit from atomicity (the rules will either be committed at
> > once, in full, or not at all) and proper error handling (the exit status
> > value of iptables-restore is meaningful and acted upon). Further, should
> > you prefer to indent the body of the heredoc, you may write <<-EOF,
> > though only leading tab characters will be stripped out.
> 
> [minor digression]
> 
> Is iptables-restore truly atomic in *all* cases? Some years ago, I found
> through experimentation that some rules were 'lost' when restoring more
> than 25 000 rules. If I placed a COMMIT every 20 000 rules or so, then
> all rules would be properly loaded. I think COMMITs break atomicity. I
> tested with 100k to 1M rules. I was comparing the efficiency of
> iptables-restore with another tool that read from STDIN; the other tool
> was about 5% more efficient.

I believe that you are correct on both counts; at least, as far as iptables-legacy-restore is concerned. My understanding is that iptables-nft-restore should not 'lose' rules because there is an initial parsing stage, after which it conveys the payload as a series of netlink messages to nftables, which is supposed to be immune to this issue. It would be nice to get confirmation one way or the other from a Netfilter developer.

Until now, I had thought that iptables-nft-restore addressed the issue of committing large batches of rules but, having just conducted a quick test, that does not appear to be the case. While I wasn't able to induce either utility to partially load a ruleset, I found that both fail outright once the number of rules in a given chain is pushed beyond a certain threshold. To my surprise, that threshold appears to be lower in the case of iptables-nft-restore.

--
Kerin Millar
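For anyone wanting to reproduce this, the workaround Neal describes can be sketched as a small generator that emits an iptables-restore payload with a COMMIT closing every batch of rules, so that each batch is committed separately. The function name, the filler rule, and the chain used are hypothetical (not from the thread); it merely illustrates the payload shape, and each extra COMMIT trades away whole-ruleset atomicity:

```shell
# gen_rules TOTAL [BATCH]: emit an iptables-restore payload in which a
# COMMIT closes every BATCH rules (Neal used roughly 20 000 per batch).
gen_rules() {
    total=$1            # total number of rules to emit
    batch=${2:-20000}   # rules per COMMIT batch
    i=1
    printf '*filter\n'
    while [ "$i" -le "$total" ]; do
        # Hypothetical filler rule: one source address per rule, purely
        # to bulk out the ruleset for testing.
        printf -- '-A INPUT -s 10.%d.%d.%d -j DROP\n' \
            $(( (i / 65536) % 256 )) $(( (i / 256) % 256 )) $(( i % 256 ))
        # Close the current batch and reopen the table, except after the
        # final rule (which the trailing COMMIT below takes care of).
        if [ $(( i % batch )) -eq 0 ] && [ "$i" -lt "$total" ]; then
            printf 'COMMIT\n*filter\n'
        fi
        i=$(( i + 1 ))
    done
    printf 'COMMIT\n'
}

# e.g. gen_rules 100000 | iptables-restore   (requires root; not run here)
```

Omitting the second argument yields a single-COMMIT payload, which is what exposes the loss (or outright failure) described above once the rule count grows large enough.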