This is a note to let you know that I've just added the patch titled

    workqueue: restore WQ_UNBOUND/max_active==1 to be ordered

to the 4.9-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     workqueue-restore-wq_unbound-max_active-1-to-be-ordered.patch
and it can be found in the queue-4.9 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.


>From 5c0338c68706be53b3dc472e4308961c36e4ece1 Mon Sep 17 00:00:00 2001
From: Tejun Heo <tj@xxxxxxxxxx>
Date: Tue, 18 Jul 2017 18:41:52 -0400
Subject: workqueue: restore WQ_UNBOUND/max_active==1 to be ordered

From: Tejun Heo <tj@xxxxxxxxxx>

commit 5c0338c68706be53b3dc472e4308961c36e4ece1 upstream.

The combination of WQ_UNBOUND and max_active == 1 used to imply ordered
execution.  After NUMA affinity 4c16bd327c74 ("workqueue: implement NUMA
affinity for unbound workqueues"), this is no longer true due to per-node
worker pools.

While the right way to create an ordered workqueue is
alloc_ordered_workqueue(), the documentation has been misleading for a
long time and people do use WQ_UNBOUND and max_active == 1 for ordered
workqueues which can lead to subtle bugs which are very difficult to
trigger.

It's unlikely that we'd see noticeable performance impact by enforcing
ordering on WQ_UNBOUND / max_active == 1 workqueues.  Let's automatically
set __WQ_ORDERED for those workqueues.
Signed-off-by: Tejun Heo <tj@xxxxxxxxxx>
Reported-by: Christoph Hellwig <hch@xxxxxxxxxxxxx>
Reported-by: Alexei Potashnik <alexei@xxxxxxxxxxxxxxx>
Fixes: 4c16bd327c74 ("workqueue: implement NUMA affinity for unbound workqueues")
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>

---
 kernel/workqueue.c |   10 ++++++++++
 1 file changed, 10 insertions(+)

--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -3915,6 +3915,16 @@ struct workqueue_struct *__alloc_workque
 	struct workqueue_struct *wq;
 	struct pool_workqueue *pwq;
 
+	/*
+	 * Unbound && max_active == 1 used to imply ordered, which is no
+	 * longer the case on NUMA machines due to per-node pools.  While
+	 * alloc_ordered_workqueue() is the right way to create an ordered
+	 * workqueue, keep the previous behavior to avoid subtle breakages
+	 * on NUMA.
+	 */
+	if ((flags & WQ_UNBOUND) && max_active == 1)
+		flags |= __WQ_ORDERED;
+
 	/* see the comment above the definition of WQ_POWER_EFFICIENT */
 	if ((flags & WQ_POWER_EFFICIENT) && wq_power_efficient)
 		flags |= WQ_UNBOUND;


Patches currently in stable-queue which might be from tj@xxxxxxxxxx are

queue-4.9/cgroup-create-dfl_root-files-on-subsys-registration.patch
queue-4.9/cgroup-fix-error-return-value-from-cgroup_subtree_control.patch
queue-4.9/workqueue-restore-wq_unbound-max_active-1-to-be-ordered.patch
queue-4.9/libata-array-underflow-in-ata_find_dev.patch