nftables: Writers starve readers

Hi!

I'm currently triaging a case where 'nft list ruleset' happens to take
more than 60s, which causes trouble in the calling code. It is not
entirely clear what happens on the system that leads to this, so I'm
checking "suspicious" cases. One of them is "too many ruleset updates",
and indeed the following script is problematic:

| # init
| iptables-nft -N foo
| (
| 	echo "*filter";
| 	for ((i = 0; i < 100000; i++)); do
| 		echo "-A foo -m comment --comment \"rule $i\" -j ACCEPT"
| 	done
| 	echo "COMMIT"
| ) | iptables-nft-restore --noflush
| 
| # flood
| while true; do
| 	iptables-nft -A foo -j ACCEPT
| 	iptables-nft -D foo -j ACCEPT
| done

A call to 'nft list ruleset' in a second terminal hangs without output.
It apparently hangs in nft_cache_update() because rule_cache_dump()
returns EINTR. On the kernel side, I guess it stems from
nl_dump_check_consistent() in __nf_tables_dump_rules(). I haven't
checked, but the generation counter likely increases while dumping the
100k rules.
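
For reference, the consistency check I have in mind looks roughly like
this (paraphrased and simplified from nl_dump_check_consistent() in
include/net/netlink.h, so don't take it as the literal source):

| /* Simplified paraphrase of nl_dump_check_consistent(),
|  * include/net/netlink.h: */
| static inline void nl_dump_check_consistent(struct netlink_callback *cb,
| 					    struct nlmsghdr *nlh)
| {
| 	/* cb->seq is refreshed from the ruleset generation counter each
| 	 * time the dump callback runs; if it moved between two batches,
| 	 * the whole dump is flagged as interrupted. */
| 	if (cb->prev_seq && cb->seq != cb->prev_seq)
| 		nlh->nlmsg_flags |= NLM_F_DUMP_INTR;
| 	cb->prev_seq = cb->seq;
| }

If I read libmnl correctly, that NLM_F_DUMP_INTR flag is what surfaces
as EINTR in rule_cache_dump(), which then throws away the partial dump
and starts over.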

One may deem this scenario unrealistic, but I had to insert a 'sleep 5'
into the while-loop to unblock 'nft list ruleset' again. A new rule
every 4s, especially in such a large ruleset, is not that unrealistic IMO.
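
On the nft side, the starvation then boils down to an unbounded retry,
roughly like this (a hypothetical sketch; the function names are made
up and do not match the actual src/cache.c code):

| /* Hypothetical sketch of nft's cache refresh behaviour as I understand
|  * it; names are made up and do not match src/cache.c: */
| int update_rule_cache(struct nft_ctx *ctx)
| {
| 	int ret;
| 
| 	do {
| 		drop_partial_cache(ctx);        /* discard what was dumped so far */
| 		ret = dump_whole_ruleset(ctx);  /* full netlink dump; fails with
| 						 * EINTR if a writer intervened */
| 	} while (ret < 0 && errno == EINTR);    /* no retry limit, no backoff */
| 
| 	return ret;
| }

With a writer committing every few seconds and a 100k-rule dump taking
longer than that per attempt, this loop never terminates.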

I wonder if we can provide some fairness to readers? Ideally a reader
would just see the ruleset as it was when it started dumping, but
keeping a copy of the large ruleset is probably not feasible.

Cheers, Phil


