On 02/07/2017 01:37 PM, Michal Hocko wrote:
> @@ -6711,7 +6714,16 @@ static int page_alloc_cpu_dead(unsigned int cpu)
>  {
> 
>  	lru_add_drain_cpu(cpu);
> +
> +	/*
> +	 * A per-cpu drain via a workqueue from drain_all_pages can be
> +	 * rescheduled onto an unrelated CPU. That allows the hotplug
> +	 * operation and the drain to potentially race on the same
> +	 * CPU. Serialise hotplug versus drain using pcpu_drain_mutex
> +	 */
> +	mutex_lock(&pcpu_drain_mutex);
>  	drain_pages(cpu);
> +	mutex_unlock(&pcpu_drain_mutex);
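For reference, the serialisation above only closes the race if drain_all_pages() takes the same mutex around queueing and flushing the per-cpu work items. A minimal sketch of that side, assuming the per-cpu pcpu_drain work items from earlier in the series (initialised elsewhere with INIT_WORK) and eliding the cpumask filtering of CPUs that actually have pages on their pcp lists:

#include <linux/mmzone.h>
#include <linux/mutex.h>
#include <linux/percpu.h>
#include <linux/workqueue.h>

static DEFINE_MUTEX(pcpu_drain_mutex);
static DEFINE_PER_CPU(struct work_struct, pcpu_drain);

void drain_all_pages(struct zone *zone)
{
	int cpu;

	/*
	 * Hold the mutex across queue + flush so page_alloc_cpu_dead()
	 * cannot call drain_pages(cpu) while the drain work for that
	 * cpu is still in flight, even if the workqueue has moved the
	 * work item to an unrelated CPU.
	 */
	mutex_lock(&pcpu_drain_mutex);
	for_each_online_cpu(cpu)
		schedule_work_on(cpu, &per_cpu(pcpu_drain, cpu));
	for_each_online_cpu(cpu)
		flush_work(&per_cpu(pcpu_drain, cpu));
	mutex_unlock(&pcpu_drain_mutex);
}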
You cannot put a sleepable lock inside a preempt-disabled section...
We can make it a spinlock, right?
Scratch that! For some reason I thought that cpu notifiers are run in an
atomic context. Now that I am checking the code again, it turns out I was
wrong. __cpu_notify uses __raw_notifier_call_chain, so this is not an
atomic context.
Good.
Anyway, shouldn't it be sufficient to disable preemption
in drain_local_pages_wq? The CPU hotplug callback will not preempt us,
and so we cannot work on the same CPU, right?
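Concretely, that suggestion would look something like this (a sketch only, assuming drain_local_pages_wq() simply wraps drain_local_pages(NULL) as in the series):

static void drain_local_pages_wq(struct work_struct *work)
{
	/*
	 * Proposed: pin the worker to whatever CPU it ends up running
	 * on while it drains, so the hotplug callback for that CPU
	 * cannot run drain_pages() concurrently with us.
	 */
	preempt_disable();
	drain_local_pages(NULL);
	preempt_enable();
}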
I thought the problem here was that the callback races with the work item that
has been migrated to a different CPU. Once we are not working on the local CPU,
disabling preempt/irqs won't help?
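To make the objection concrete: drain_local_pages() samples smp_processor_id() and then drains that CPU's lists through per_cpu_ptr(), so a migrated work item can end up draining a remote CPU's pageset, and disabling preemption/irqs only pins the CPU the work happens to run on. Roughly, simplified from drain_pages_zone() of that era:

static void drain_pages_zone(unsigned int cpu, struct zone *zone)
{
	unsigned long flags;
	struct per_cpu_pageset *pset;
	struct per_cpu_pages *pcp;

	local_irq_save(flags);
	/*
	 * 'cpu' is the CPU being drained. When this runs from a work
	 * item the workqueue has migrated, cpu != smp_processor_id(),
	 * so the irq disabling above pins only the local CPU, not the
	 * one whose lists are being freed.
	 */
	pset = per_cpu_ptr(zone->pageset, cpu);
	pcp = &pset->pcp;
	if (pcp->count) {
		free_pcppages_bulk(zone, pcp->count, pcp);
		pcp->count = 0;
	}
	local_irq_restore(flags);
}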