On PREEMPT_RT, interrupts and preemption are always enabled. The locking
function __memcg_stats_lock() always disables preemption on PREEMPT_RT.
The recently added checks need to be performed only on !PREEMPT_RT, where
disabled preemption and disabled interrupts are used for protection.

Please fold into:
  "Protect per-CPU counter by disabling preemption on PREEMPT_RT"

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
---
Andrew, if this is getting too confusing at some point, I can fold it
myself and repost the whole lot. Whatever works best for you.

 mm/memcontrol.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 73832cd1e9da4..63287fd03250b 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -741,7 +741,7 @@ void __mod_memcg_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
 	 * interrupt context while other caller need to have disabled interrupt.
 	 */
 	__memcg_stats_lock();
-	if (IS_ENABLED(CONFIG_DEBUG_VM)) {
+	if (IS_ENABLED(CONFIG_DEBUG_VM) && !IS_ENABLED(CONFIG_PREEMPT_RT)) {
 		switch (idx) {
 		case NR_ANON_MAPPED:
 		case NR_FILE_MAPPED:
-- 
2.35.1
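
For reference, a minimal sketch of the kind of helper the commit message
describes, assuming __memcg_stats_lock() simply disables preemption on
PREEMPT_RT and is a no-op otherwise. This is illustrative only and not the
exact code from the series; the unlock counterpart shown here is hypothetical.

/*
 * Illustrative sketch only, not the exact helpers from mm/memcontrol.c:
 * on PREEMPT_RT the per-CPU counters are protected by disabling
 * preemption; on !PREEMPT_RT the callers already run with interrupts
 * disabled or from interrupt context, so nothing needs to be done here.
 */
static void __memcg_stats_lock(void)
{
#ifdef CONFIG_PREEMPT_RT
	preempt_disable();
#endif
}

static void __memcg_stats_unlock(void)
{
#ifdef CONFIG_PREEMPT_RT
	preempt_enable();
#endif
}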