[PATCH net-next v2 0/3] Replace xt_recseq with u64_stats.

The per-CPU xt_recseq is a custom netfilter seqcount. It provides
synchronisation for the replacement of the xt_table::private pointer and
ensures that the two counters in xt_counters are properly observed during
an update on 32bit architectures. xt_recseq also supports recursion.
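For reference, the current packet path brackets the table traversal
roughly like this (heavily trimmed, based on ipt_do_table(); not a
verbatim quote):

	/* heavily trimmed sketch of what ipt_do_table() does today */
	local_bh_disable();
	addend = xt_write_recseq_begin();
	private = READ_ONCE(table->private);
	...
	counter = xt_get_this_cpu_counter(&e->counters);
	ADD_COUNTER(*counter, skb->len, 1);
	...
	xt_write_recseq_end(addend);
	local_bh_enable();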

This construct is less than optimal on PREEMPT_RT because the lack of a
lock associated with the seqcount can lead to a deadlock if a high
priority reader interrupts a writer. xt_recseq also relies on locking
with BH-disable, which becomes problematic if the lock, currently part
of local_bh_disable() on PREEMPT_RT, gets removed.

This can be improved independently of PREEMPT_RT:
- Use RCU for synchronisation. This means ipt_do_table() (and the two
  other variants) accesses xt_table::private within an RCU read-side
  section. xt_replace_table() replaces the pointer with
  rcu_assign_pointer() and uses synchronize_rcu() to wait until every
  reader has left its RCU section.

- Use u64_stats_t for the statistics. The advantage here is that the
  seqcount inside u64_stats_sync is optimized away on 64bit
  architectures. The increment becomes just an add, the read just a read
  of the variable without a loop. On 32bit architectures the seqcount
  remains but its scope is smaller. A rough sketch of both changes
  follows below.
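To make the direction more concrete, here is a sketch (not the literal
hunks from the patches; the name and layout of the kernel-internal
counter struct are only illustrative):

	/* kernel-internal counters, illustrative layout only */
	struct xt_counters_k {
		u64_stats_t             pcnt;   /* packets */
		u64_stats_t             bcnt;   /* bytes */
		struct u64_stats_sync   syncp;  /* compiled away on 64bit */
	};

	/* packet path: plain RCU read section instead of xt_recseq */
	rcu_read_lock();
	private = rcu_dereference(table->private);
	...
	u64_stats_update_begin(&counter->syncp);
	u64_stats_add(&counter->pcnt, 1);
	u64_stats_add(&counter->bcnt, skb->len);
	u64_stats_update_end(&counter->syncp);
	...
	rcu_read_unlock();

	/* get_counters(): the retry loop only matters on 32bit */
	do {
		start = u64_stats_fetch_begin(&counter->syncp);
		pcnt = u64_stats_read(&counter->pcnt);
		bcnt = u64_stats_read(&counter->bcnt);
	} while (u64_stats_fetch_retry(&counter->syncp, start));

	/* xt_replace_table(): publish the new private, wait for readers */
	rcu_assign_pointer(table->private, newinfo);
	synchronize_rcu();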

The struct xt_counters is defined in a user exported header (uapi). So
in patch #2 I tried to split the plain u64 access from the "internal"
access which treats the struct either as two counters or as a per-CPU
pointer. In order not to expose u64_stats_t to userland I added a "pad"
which is cast to the internal type. The intent is to make it obvious
that a function like xt_get_this_cpu_counter() expects the possibly
per-CPU type while mark_source_chains() or get_counters() expect the
plain u64 type without pointers.
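In other words, something along these lines (simplified, not the exact
layout from patch #2):

	/* uapi: what userland continues to see, two plain 64bit counters */
	struct xt_counters {
		__u64 pcnt, bcnt;
	};

	/* kernel-only type, never exposed via a uapi header;
	 * illustrative layout only
	 */
	struct xt_counters_k {
		u64_stats_t             pcnt;
		u64_stats_t             bcnt;
		struct u64_stats_sync   syncp;
	};

From userland's point of view the per-rule storage stays as it is; the
kernel casts it to the internal type (or to a per-CPU pointer to it),
which is what xt_get_this_cpu_counter() hands out, while get_counters()
and mark_source_chains() keep operating on plain struct xt_counters.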

v1…v2 https://lore.kernel.org/all/20250216125135.3037967-1-bigeasy@xxxxxxxxxxxxx/
  - Updated kerneldoc in 2/3 so that the renamed parameter is part of
    it.
  - Updated the description of 1/3 in case there are complaints
    regarding the synchronize_rcu(). The suggested course of action is
    to motivate people to move away from the "legacy" towards the "nft"
    tooling. The last resort is not to wait for the in-flight counters
    and just copy what is there.

Sebastian Andrzej Siewior (3):
  netfilter: Make xt_table::private RCU protected.
  netfilter: Split the xt_counters type between kernel and user.
  netfilter: Use u64_stats for counters in xt_counters_k.

 include/linux/netfilter/x_tables.h            | 113 +++++++-----------
 include/uapi/linux/netfilter/x_tables.h       |   4 +
 include/uapi/linux/netfilter_arp/arp_tables.h |   5 +-
 include/uapi/linux/netfilter_ipv4/ip_tables.h |   5 +-
 .../uapi/linux/netfilter_ipv6/ip6_tables.h    |   5 +-
 net/ipv4/netfilter/arp_tables.c               |  65 +++++-----
 net/ipv4/netfilter/ip_tables.c                |  65 +++++-----
 net/ipv6/netfilter/ip6_tables.c               |  65 +++++-----
 net/netfilter/x_tables.c                      |  79 ++++++------
 9 files changed, 192 insertions(+), 214 deletions(-)

-- 
2.47.2





