On Fri, 4 Mar 2022 13:29:31 -0300 Marcelo Tosatti <mtosatti@xxxxxxxxxx> wrote:

> On systems that run FIFO:1 applications that busy loop
> on isolated CPUs, executing tasks on such CPUs under
> lower priority is undesired (since that will either
> hang the system, or cause longer interruption to the
> FIFO task due to execution of lower priority task
> with very small sched slices).
>
> Commit d479960e44f27e0e52ba31b21740b703c538027c ("mm: disable LRU
> pagevec during the migration temporarily") relies on
> queueing work items on all online CPUs to ensure visibility
> of lru_disable_count.
>
> However, it's possible to use synchronize_rcu(), which will provide the same
> guarantees (see the comment this patch modifies on lru_cache_disable()).
>
> Fixes:
>
> ...
>
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -831,8 +831,7 @@ inline void __lru_add_drain_all(bool force_all_cpus)
>  	for_each_online_cpu(cpu) {
>  		struct work_struct *work = &per_cpu(lru_add_drain_work, cpu);
>
> -		if (force_all_cpus ||
> -		    pagevec_count(&per_cpu(lru_pvecs.lru_add, cpu)) ||
> +		if (pagevec_count(&per_cpu(lru_pvecs.lru_add, cpu)) ||

Please changelog this alteration?

>  		    data_race(pagevec_count(&per_cpu(lru_rotate.pvec, cpu))) ||
>  		    pagevec_count(&per_cpu(lru_pvecs.lru_deactivate_file, cpu)) ||
>  		    pagevec_count(&per_cpu(lru_pvecs.lru_deactivate, cpu)) ||
> @@ -876,15 +875,21 @@ atomic_t lru_disable_count = ATOMIC_INIT(0);
>  void lru_cache_disable(void)
>  {
>  	atomic_inc(&lru_disable_count);
> -#ifdef CONFIG_SMP
>  	/*
> -	 * lru_add_drain_all in the force mode will schedule draining on
> -	 * all online CPUs so any calls of lru_cache_disabled wrapped by
> -	 * local_lock or preemption disabled would be ordered by that.
> -	 * The atomic operation doesn't need to have stronger ordering
> -	 * requirements because that is enforced by the scheduling
> -	 * guarantees.
> +	 * Readers of lru_disable_count are protected by either disabling
> +	 * preemption or rcu_read_lock:
> +	 *
> +	 * preempt_disable, local_irq_disable  [bh_lru_lock()]
> +	 * rcu_read_lock                       [rt_spin_lock CONFIG_PREEMPT_RT]
> +	 * preempt_disable                     [local_lock !CONFIG_PREEMPT_RT]
> +	 *
> +	 * Since v5.1 kernel, synchronize_rcu() is guaranteed to wait on
> +	 * preempt_disable() regions of code. So any CPU which sees
> +	 * lru_disable_count = 0 will have exited the critical
> +	 * section when synchronize_rcu() returns.
>  	 */
> +	synchronize_rcu();
> +#ifdef CONFIG_SMP
>  	__lru_add_drain_all(true);
>  #else
>  	lru_add_and_bh_lrus_drain();
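
To make the ordering argument in the new comment concrete, here is a minimal
user-space sketch of the same pattern using liburcu (userspace RCU). It is
only an illustration, not the kernel code: disable_count and reader_fn are
invented names standing in for lru_disable_count and the pagevec users, and
the exact link flags may vary by distribution (typically -lurcu -lpthread).

/*
 * Writer bumps a "disable" counter and then calls synchronize_rcu(), so any
 * reader that sampled the counter as 0 inside an RCU read-side critical
 * section is guaranteed to have left that section before the writer proceeds.
 *
 * Build (assuming liburcu is installed):  cc sketch.c -o sketch -lurcu -lpthread
 */
#include <pthread.h>
#include <stdio.h>
#include <urcu.h>		/* rcu_read_lock(), synchronize_rcu(), ... */
#include <urcu/uatomic.h>	/* uatomic_read(), uatomic_inc() */

static int disable_count;	/* stand-in for lru_disable_count */

static void *reader_fn(void *arg)
{
	rcu_register_thread();

	rcu_read_lock();	/* analogue of preempt_disable()/local_lock() */
	if (uatomic_read(&disable_count) == 0) {
		/*
		 * Fast path: batching allowed.  Whatever is done here is
		 * finished before the writer's synchronize_rcu() returns.
		 */
		printf("reader: using per-thread batch\n");
	} else {
		printf("reader: batching disabled, taking the slow path\n");
	}
	rcu_read_unlock();

	rcu_unregister_thread();
	return NULL;
}

int main(void)
{
	pthread_t reader;

	pthread_create(&reader, NULL, reader_fn, NULL);

	/* Writer side, mirroring lru_cache_disable(). */
	uatomic_inc(&disable_count);
	/*
	 * Wait for every RCU read-side critical section that might have
	 * observed disable_count == 0 to complete; afterwards all new
	 * readers see the incremented count.
	 */
	synchronize_rcu();
	printf("writer: all pre-existing readers have drained\n");

	pthread_join(reader, NULL);
	return 0;
}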