The patch titled
     Subject: cgroup: use irqsave in cgroup_rstat_flush_locked().
has been added to the -mm tree.  Its filename is
     cgroup-use-irqsave-in-cgroup_rstat_flush_locked.patch

This patch should soon appear at
    https://ozlabs.org/~akpm/mmots/broken-out/cgroup-use-irqsave-in-cgroup_rstat_flush_locked.patch
and later at
    https://ozlabs.org/~akpm/mmotm/broken-out/cgroup-use-irqsave-in-cgroup_rstat_flush_locked.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
Subject: cgroup: use irqsave in cgroup_rstat_flush_locked().

All callers of cgroup_rstat_flush_locked() acquire cgroup_rstat_lock
either with spin_lock_irq() or spin_lock_irqsave().
cgroup_rstat_flush_locked() itself acquires cgroup_rstat_cpu_lock, which
is a raw_spin_lock.  This lock is also acquired in cgroup_rstat_updated()
in IRQ context and therefore requires the _irqsave() locking suffix in
cgroup_rstat_flush_locked().

Since there is no difference between spin_lock_t and raw_spin_lock_t on
!RT, lockdep does not complain here.  On RT, lockdep complains because
interrupts were not disabled here and a deadlock is possible.

Acquire the raw_spin_lock_t with interrupts disabled.

Link: https://lkml.kernel.org/r/20220301122143.1521823-2-bigeasy@xxxxxxxxxxxxx
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Tejun Heo <tj@xxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Zefan Li <lizefan.x@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 kernel/cgroup/rstat.c |    5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

--- a/kernel/cgroup/rstat.c~cgroup-use-irqsave-in-cgroup_rstat_flush_locked
+++ a/kernel/cgroup/rstat.c
@@ -153,8 +153,9 @@ static void cgroup_rstat_flush_locked(st
 		raw_spinlock_t *cpu_lock = per_cpu_ptr(&cgroup_rstat_cpu_lock,
						       cpu);
 		struct cgroup *pos = NULL;
+		unsigned long flags;

-		raw_spin_lock(cpu_lock);
+		raw_spin_lock_irqsave(cpu_lock, flags);
 		while ((pos = cgroup_rstat_cpu_pop_updated(pos, cgrp, cpu))) {
 			struct cgroup_subsys_state *css;

@@ -166,7 +167,7 @@ static void cgroup_rstat_flush_locked(st
 				css->ss->css_rstat_flush(css, cpu);
 			rcu_read_unlock();
 		}
-		raw_spin_unlock(cpu_lock);
+		raw_spin_unlock_irqrestore(cpu_lock, flags);

 		/* if @may_sleep, play nice and yield if necessary */
 		if (may_sleep && (need_resched() ||
_

Patches currently in -mm which might be from bigeasy@xxxxxxxxxxxxx are

mm-memcg-disable-threshold-event-handlers-on-preempt_rt.patch
mm-memcg-protect-per-cpu-counter-by-disabling-preemption-on-preempt_rt-where-needed.patch
mm-memcg-protect-memcg_stock-with-a-local_lock_t.patch
mm-memcg-disable-migration-instead-of-preemption-in-drain_all_stock.patch
cgroup-use-irqsave-in-cgroup_rstat_flush_locked.patch
mm-workingset-replace-irq-off-check-with-a-lockdep-assert.patch
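
For readers unfamiliar with the locking rule the patch applies, here is a
minimal kernel-style sketch of the pattern.  It is illustrative only and
not part of the patch; the names example_lock, example_irq_handler and
example_flush are hypothetical, not kernel symbols.

/*
 * A raw_spinlock_t that is also taken from hard-IRQ context must be
 * acquired with interrupts disabled in process context; otherwise the
 * IRQ handler can spin forever on a lock already held on the same CPU.
 */
#include <linux/spinlock.h>
#include <linux/interrupt.h>

static DEFINE_RAW_SPINLOCK(example_lock);

/* Hard-IRQ context, analogous to cgroup_rstat_updated(). */
static irqreturn_t example_irq_handler(int irq, void *dev_id)
{
	raw_spin_lock(&example_lock);	/* interrupts are already off here */
	/* ... update per-CPU state ... */
	raw_spin_unlock(&example_lock);
	return IRQ_HANDLED;
}

/* Process context, analogous to cgroup_rstat_flush_locked(). */
static void example_flush(void)
{
	unsigned long flags;

	/*
	 * _irqsave() keeps example_irq_handler() from interrupting this
	 * section on the local CPU and deadlocking on example_lock.  On
	 * !RT the callers already run with IRQs disabled, so only
	 * PREEMPT_RT (and lockdep on RT) notices the difference.
	 */
	raw_spin_lock_irqsave(&example_lock, flags);
	/* ... flush per-CPU state ... */
	raw_spin_unlock_irqrestore(&example_lock, flags);
}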