On Fri, Jan 20, 2017 at 03:26:05PM +0100, Vlastimil Babka wrote:
> > @@ -2392,8 +2404,24 @@ void drain_all_pages(struct zone *zone)
> >  		else
> >  			cpumask_clear_cpu(cpu, &cpus_with_pcps);
> >  	}
> > -	on_each_cpu_mask(&cpus_with_pcps, (smp_call_func_t) drain_local_pages,
> > -								zone, 1);
> > +
> > +	if (works) {
> > +		for_each_cpu(cpu, &cpus_with_pcps) {
> > +			struct work_struct *work = per_cpu_ptr(works, cpu);
> > +			INIT_WORK(work, drain_local_pages_wq);
> > +			schedule_work_on(cpu, work);
> 
> This translates to queue_work_on(), which has the comment of "We queue
> the work to a specific CPU, the caller must ensure it can't go away.",
> so is this safe? lru_add_drain_all() uses get_online_cpus() around this.
> 

get_online_cpus() would be required.

> schedule_work_on() also uses the generic system_wq, while lru drain has
> its own workqueue with WQ_MEM_RECLAIM so it seems that would be useful
> here as well?
> 

I would be reluctant to introduce a dedicated queue unless there was a
definite case where an OOM occurred because pages were pinned on per-cpu
lists and couldn't be drained because the buddy allocator was depleted.
As it was, I thought the fallback case was excessively paranoid.

> > +		}
> > +		for_each_cpu(cpu, &cpus_with_pcps)
> > +			flush_work(per_cpu_ptr(works, cpu));
> > +	} else {
> > +		for_each_cpu(cpu, &cpus_with_pcps) {
> > +			struct work_struct work;
> > +
> > +			INIT_WORK(&work, drain_local_pages_wq);
> > +			schedule_work_on(cpu, &work);
> > +			flush_work(&work);
> 
> Totally out of scope, but I wonder if schedule_on_each_cpu() could use
> the same fallback that's here?
> 

I'm not aware of a case where it really has been a problem. I only
considered it here as the likely caller is in a context that is failing
allocations.

-- 
Mel Gorman
SUSE Labs
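
For reference, a minimal sketch (not part of the posted patch) of how the
queueing loop from the diff above could be bracketed with
get_online_cpus()/put_online_cpus(), so that a CPU cannot be unplugged
between schedule_work_on() and flush_work(); the !works fallback and the
rest of drain_all_pages() are omitted:

	/*
	 * Illustrative only: works, cpus_with_pcps and
	 * drain_local_pages_wq are the names used in the quoted diff.
	 */
	get_online_cpus();
	for_each_cpu(cpu, &cpus_with_pcps) {
		struct work_struct *work = per_cpu_ptr(works, cpu);

		INIT_WORK(work, drain_local_pages_wq);
		schedule_work_on(cpu, work);
	}
	for_each_cpu(cpu, &cpus_with_pcps)
		flush_work(per_cpu_ptr(works, cpu));
	put_online_cpus();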
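
The dedicated-workqueue alternative raised in the review would look roughly
like the sketch below; the queue name "mm_percpu_wq" is hypothetical, and,
as noted above, the posted patch deliberately keeps using system_wq via
schedule_work_on():

	/* Hypothetical WQ_MEM_RECLAIM queue, analogous to the one lru drain uses. */
	static struct workqueue_struct *mm_percpu_wq;

	static int __init mm_percpu_wq_init(void)
	{
		mm_percpu_wq = alloc_workqueue("mm_percpu_wq", WQ_MEM_RECLAIM, 0);
		return mm_percpu_wq ? 0 : -ENOMEM;
	}

	/*
	 * Queueing would then use queue_work_on(cpu, mm_percpu_wq, work)
	 * instead of schedule_work_on(cpu, work).
	 */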