On Mon, Oct 30, 2023 at 03:51:04PM +0100, Valentin Schneider wrote:
> Consider the following scenario under PREEMPT_RT:
> o A CFS task p0 gets throttled while holding read_lock(&lock)
> o A task p1 blocks on write_lock(&lock), making further readers enter
>   the slowpath
> o A ktimers or ksoftirqd task blocks on read_lock(&lock)
>
> If the cfs_bandwidth.period_timer to replenish p0's runtime is enqueued
> on the same CPU as one where ktimers/ksoftirqd is blocked on
> read_lock(&lock), this creates a circular dependency.
>
> This has been observed to happen with:
> o fs/eventpoll.c::ep->lock
> o net/netlink/af_netlink.c::nl_table_lock (after hand-fixing the above)
> but can trigger with any rwlock that can be acquired in both process
> and softirq contexts.
>
> The linux-rt tree has had
>   1ea50f9636f0 ("softirq: Use a dedicated thread for timer wakeups.")
> which helped this scenario for non-rwlock locks by ensuring the
> throttled task would get PI'd to FIFO1 (ktimers' default priority).
> Unfortunately, rwlocks cannot sanely do PI as they allow multiple
> readers.
>
> Make the period_timer expire in hardirq context under PREEMPT_RT. The
> callback for this timer can end up doing a lot of work, but this is
> mitigated somewhat when using nohz_full / CPU isolation: the timers
> *are* pinned, but on the CPUs the taskgroups are created on, which is
> usually going to be HK CPUs.

Moo... so I think 'people' have been pushing towards changing the
bandwidth thing to only throttle on the return-to-user path. This solves
the kernel side of the lock holder 'preemption' issue.

I'm thinking working on that is saner than adding this O(n) cgroup loop
to hard-irq context.

Hmm?
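As an aside, the quoted scenario is a classic wait-for cycle, and can be
sketched as a toy dependency graph. This is purely illustrative (node
names like "p0", "ktimers" stand in for the tasks in the changelog; none
of this is kernel code):

```python
# Toy wait-for graph for the scenario quoted above. Edges mean "waits
# for"; node names are illustrative stand-ins, not kernel symbols.
waits_for = {
    "p0": "period_timer",       # throttled CFS reader; needs its runtime
                                # replenished by the bandwidth timer
    "period_timer": "ktimers",  # timer expiry work runs in the ktimers
                                # thread under PREEMPT_RT
    "ktimers": "p1",            # stuck in the read_lock() slowpath
                                # behind the pending writer p1
    "p1": "p0",                 # write_lock() waits for reader p0 to
                                # drop the lock
}

def find_cycle(graph, start):
    """Follow wait-for edges from `start`; return the cycle once a node repeats."""
    seen, node = [], start
    while node not in seen:
        seen.append(node)
        node = graph.get(node)
        if node is None:
            return None  # chain ends without looping: no deadlock
    return seen[seen.index(node):] + [node]

print(" -> ".join(find_cycle(waits_for, "p0")))
# p0 -> period_timer -> ktimers -> p1 -> p0
```

Breaking any edge resolves the cycle: the patch breaks the
period_timer -> ktimers edge (hardirq expiry), while throttling only on
return-to-user would break the p0 -> period_timer edge inside the kernel.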