On Tue, 2022-02-22 at 11:47 -0300, Marcelo Tosatti wrote:
> @@ -918,14 +917,23 @@ atomic_t lru_disable_count = ATOMIC_INIT
>  void lru_cache_disable(void)
>  {
>  	atomic_inc(&lru_disable_count);
> +	synchronize_rcu();
>  #ifdef CONFIG_SMP
>  	/*
> -	 * lru_add_drain_all in the force mode will schedule draining on
> -	 * all online CPUs so any calls of lru_cache_disabled wrapped by
> -	 * local_lock or preemption disabled would be ordered by that.
> -	 * The atomic operation doesn't need to have stronger ordering
> -	 * requirements because that is enforced by the scheduling
> -	 * guarantees.
> +	 * synchronize_rcu() waits for preemption disabled
> +	 * and RCU read side critical sections
> +	 * For the users of lru_disable_count:
> +	 *
> +	 *	preempt_disable, local_irq_disable() [bh_lru_lock()]
> +	 *	rcu_read_lock [lru_pvecs CONFIG_PREEMPT_RT]
> +	 *	preempt_disable [lru_pvecs !CONFIG_PREEMPT_RT]
> +	 *
> +	 *
> +	 * so any calls of lru_cache_disabled wrapped by
> +	 * local_lock+rcu_read_lock or preemption disabled would be
> +	 * ordered by that. The atomic operation doesn't need to have
> +	 * stronger ordering requirements because that is enforced
> +	 * by the scheduling guarantees.

"The atomic operation doesn't need to have stronger ordering
requirements because that is enforced by the scheduling guarantees."

This is no longer needed.

Regards,

-- 
Nicolás Sáenz