On Tue, Aug 13, 2013 at 07:44:55PM -0400, Chris Metcalf wrote:
> int lru_add_drain_all(void)
> {
>	static struct cpumask mask;
>	static DEFINE_MUTEX(lock);

Instead of cpumask, you can DEFINE_PER_CPU(struct work_struct, ...).

>	for_each_online_cpu(cpu) {
>		if (pagevec_count(&per_cpu(lru_add_pvec, cpu)) ||
>		    pagevec_count(&per_cpu(lru_rotate_pvecs, cpu)) ||
>		    pagevec_count(&per_cpu(lru_deactivate_pvecs, cpu)) ||
>		    need_activate_page_drain(cpu))
>			cpumask_set_cpu(cpu, &mask);

and schedule the work items directly.

>	}
>
>	rc = schedule_on_cpu_mask(lru_add_drain_per_cpu, &mask);

Open coding the flushing can be a bit bothersome, but you can also
create a per-cpu workqueue, schedule the work items on it, and then
flush the workqueue instead.  No matter how the flushing is
implemented, the path wouldn't do any memory allocation, which I
thought was the topic of the thread, no?
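Something like the following completely untested sketch is what I have
in mind.  lru_drain_work is just a name made up for illustration; the
pagevec tests and lru_add_drain_per_cpu() are from your patch.

static DEFINE_PER_CPU(struct work_struct, lru_drain_work);

int lru_add_drain_all(void)
{
	static DEFINE_MUTEX(lock);
	int cpu;

	mutex_lock(&lock);
	/* pin the online mask so the flush loop sees the same CPUs */
	get_online_cpus();

	for_each_online_cpu(cpu) {
		struct work_struct *work = &per_cpu(lru_drain_work, cpu);

		INIT_WORK(work, lru_add_drain_per_cpu);
		if (pagevec_count(&per_cpu(lru_add_pvec, cpu)) ||
		    pagevec_count(&per_cpu(lru_rotate_pvecs, cpu)) ||
		    pagevec_count(&per_cpu(lru_deactivate_pvecs, cpu)) ||
		    need_activate_page_drain(cpu))
			schedule_work_on(cpu, work);
	}

	/*
	 * Open-coded flush.  flush_work() on an item which was
	 * initialized but never queued is a noop returning %false, so
	 * there's no need to remember which CPUs actually had work
	 * scheduled on them.
	 */
	for_each_online_cpu(cpu)
		flush_work(&per_cpu(lru_drain_work, cpu));

	put_online_cpus();
	mutex_unlock(&lock);
	return 0;	/* nothing left which can fail - no allocation */
}

Thanks.

-- 
tejun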