On 2025-02-21 14:31:40 [+0100], To netfilter-devel@xxxxxxxxxxxxxxx wrote:
> The per-CPU xt_recseq is a custom netfilter seqcount. It provides
> synchronisation for the replacement of the xt_table::private pointer and
> ensures that the two counters in xt_counters are properly observed during
> an update on 32bit architectures. xt_recseq also supports recursion.
>
> This construct is less than optimal on PREEMPT_RT because the lack of an
> associated lock (with the seqcount) can lead to a deadlock if a
> high-priority reader interrupts a writer. Also, xt_recseq relies on
> locking with BH-disable, which becomes problematic if the lock, currently
> part of local_bh_disable() on PREEMPT_RT, gets removed.
>
> This can be optimized independently of PREEMPT_RT:
> - Use RCU for synchronisation. This means ipt_do_table() (and the two
>   others) access xt_table::private within an RCU read-side section.
>   xt_replace_table() replaces the pointer with rcu_assign_pointer() and
>   uses synchronize_rcu() to wait until every reader has left its RCU
>   section.
>
> - Use u64_stats_t for the statistics. The advantage here is that
>   u64_stats_sync, which uses a seqcount, is optimized away on 64bit
>   architectures. The increment becomes just an add, the read just a read
>   of the variable without a loop. On 32bit architectures the seqcount
>   remains, but its scope is smaller.
>
> The struct xt_counters is defined in a user-exported header (uapi). So
> in patch #2 I tried to split the regular u64 access and the "internal"
> access, which treats the struct either as two counters or as a per-CPU
> pointer. In order not to expose u64_stats_t to userland I added a "pad"
> which is cast to the internal type. I hoped that this makes it obvious
> that a function like xt_get_this_cpu_counter() expects the possible
> per-CPU type while mark_source_chains() or get_counters() expect the u64
> type without pointers.

A gentle ping in case this got forgotten.

Sebastian
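
For anyone skimming the thread, the two patterns quoted above can be sketched roughly as follows. This is a simplified illustration against the in-kernel RCU and u64_stats APIs, not the actual patch: the struct layout and all `demo_*` names are hypothetical, and the real series additionally has to deal with the uapi-exported xt_counters layout.

```c
/* Sketch only: hypothetical names, simplified declarations. */
#include <linux/rcupdate.h>
#include <linux/u64_stats_sync.h>

struct demo_counter {
	u64_stats_t bcnt;		/* byte counter */
	u64_stats_t pcnt;		/* packet counter */
	struct u64_stats_sync syncp;	/* empty (no seqcount) on 64bit */
};

/* Reader side: dereference the private pointer inside an RCU section,
 * as ipt_do_table() and friends would. */
static void demo_do_table(struct xt_table *table)
{
	const struct xt_table_info *priv;

	rcu_read_lock();
	priv = rcu_dereference(table->private);
	/* ... traverse the ruleset in priv ... */
	rcu_read_unlock();
}

/* Writer side: publish the new table, then wait until every reader
 * has left its RCU read-side section before touching the old one. */
static struct xt_table_info *demo_replace(struct xt_table *table,
					  struct xt_table_info *newinfo)
{
	struct xt_table_info *old = rcu_access_pointer(table->private);

	rcu_assign_pointer(table->private, newinfo);
	synchronize_rcu();
	return old;	/* now safe to harvest counters and free */
}

/* Counter update: plain adds on 64bit; on 32bit the seqcount in
 * syncp guards the two 32bit halves. */
static void demo_count(struct demo_counter *c, unsigned int bytes)
{
	u64_stats_update_begin(&c->syncp);
	u64_stats_add(&c->bcnt, bytes);
	u64_stats_inc(&c->pcnt);
	u64_stats_update_end(&c->syncp);
}

/* Counter read: a plain load on 64bit; the retry loop only does
 * real work on 32bit architectures. */
static u64 demo_read_packets(const struct demo_counter *c)
{
	unsigned int start;
	u64 packets;

	do {
		start = u64_stats_fetch_begin(&c->syncp);
		packets = u64_stats_read(&c->pcnt);
	} while (u64_stats_fetch_retry(&c->syncp, start));
	return packets;
}
```

Note how the reader never blocks the writer: the high-priority-reader-preempts-writer deadlock described for xt_recseq cannot occur because the read side takes no seqcount and the write side waits only via synchronize_rcu().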