This is a note to let you know that I've just added the patch titled

    workqueue: apply __WQ_ORDERED to create_singlethread_workqueue()

to the 3.10-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     workqueue-apply-__wq_ordered-to-create_singlethread_workqueue.patch
and it can be found in the queue-3.10 subdirectory.

If you, or anyone else, feels it should not be added to the stable
tree, please let <stable@xxxxxxxxxxxxxxx> know about it.


>From e09c2c295468476a239d13324ce9042ec4de05eb Mon Sep 17 00:00:00 2001
From: Tejun Heo <tj@xxxxxxxxxx>
Date: Sat, 13 Sep 2014 04:14:30 +0900
Subject: workqueue: apply __WQ_ORDERED to create_singlethread_workqueue()

From: Tejun Heo <tj@xxxxxxxxxx>

commit e09c2c295468476a239d13324ce9042ec4de05eb upstream.

create_singlethread_workqueue() is a compat interface for single
threaded workqueue which maps to ordered workqueue w/ rescuer in the
current implementation.  create_singlethread_workqueue() is currently
implemented by invoking alloc_workqueue() w/ appropriate parameters.

8719dceae2f9 ("workqueue: reject adjusting max_active or applying
attrs to ordered workqueues") introduced __WQ_ORDERED to protect
ordered workqueues against dynamic attribute changes which can break
ordering guarantees, but forgot to apply it to
create_singlethread_workqueue().  This in itself is okay, as nobody
currently uses dynamic attribute changes on workqueues created with
create_singlethread_workqueue().

However, 4c16bd327c ("workqueue: implement NUMA affinity for unbound
workqueues") broke the single-threadedness guarantee for ordered
workqueues by allocating a separate pool_workqueue on each NUMA node
by default.  A later change, 8a2b75384444 ("workqueue: fix ordered
workqueues in NUMA setups"), fixed it by allocating only one global
pool_workqueue if __WQ_ORDERED is set.

Combined, the __WQ_ORDERED omission in create_singlethread_workqueue()
became critical, breaking its single-threadedness and ordering
guarantee.

Let's make create_singlethread_workqueue() wrap
alloc_ordered_workqueue() instead, so that it inherits __WQ_ORDERED
and can implicitly track future ordered-workqueue changes.

v2: I missed that __WQ_ORDERED now protects against pwq splitting
    across NUMA nodes and incorrectly described the patch as a
    nice-to-have fix to protect against future dynamic attribute
    usages.  Oleg pointed out that this is actually a critical
    breakage due to 8a2b75384444 ("workqueue: fix ordered workqueues
    in NUMA setups").
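For readers who do not have the header in front of them, the sketch
below shows what the change amounts to at the macro level.  It is an
illustration assembled from the 3.10-era include/linux/workqueue.h,
not part of the patch, and the alloc_ordered_workqueue() definition
quoted here should be verified against the actual tree.

/* Before the patch, the compat macro expanded without __WQ_ORDERED:
 *
 *   #define create_singlethread_workqueue(name)			\
 *	alloc_workqueue((name), WQ_UNBOUND | WQ_MEM_RECLAIM, 1)
 */

/* After the patch it wraps alloc_ordered_workqueue() instead: */
#define create_singlethread_workqueue(name)				\
	alloc_ordered_workqueue("%s", WQ_MEM_RECLAIM, name)

/* ...which, via the ordered-workqueue helper
 *
 *   #define alloc_ordered_workqueue(fmt, flags, args...)		\
 *	alloc_workqueue(fmt, WQ_UNBOUND | __WQ_ORDERED | (flags), 1, ##args)
 *
 * ends up as
 *
 *   alloc_workqueue("%s", WQ_UNBOUND | __WQ_ORDERED | WQ_MEM_RECLAIM, 1, name);
 *
 * __WQ_ORDERED is the flag that makes the unbound-workqueue NUMA code
 * use a single pool_workqueue for all nodes (8a2b75384444) and reject
 * attribute changes that would break ordering (8719dceae2f9).
 */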
Signed-off-by: Tejun Heo <tj@xxxxxxxxxx>
Reported-by: Mike Anderson <mike.anderson@xxxxxxxxxx>
Cc: Oleg Nesterov <onestero@xxxxxxxxxx>
Cc: Gustavo Luiz Duarte <gduarte@xxxxxxxxxx>
Cc: Tomas Henzl <thenzl@xxxxxxxxxx>
Fixes: 4c16bd327c ("workqueue: implement NUMA affinity for unbound workqueues")
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>

---
 include/linux/workqueue.h |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/include/linux/workqueue.h
+++ b/include/linux/workqueue.h
@@ -414,7 +414,7 @@ __alloc_workqueue_key(const char *fmt, u
 #define create_freezable_workqueue(name)			\
 	alloc_workqueue((name), WQ_FREEZABLE | WQ_UNBOUND | WQ_MEM_RECLAIM, 1)
 #define create_singlethread_workqueue(name)			\
-	alloc_workqueue((name), WQ_UNBOUND | WQ_MEM_RECLAIM, 1)
+	alloc_ordered_workqueue("%s", WQ_MEM_RECLAIM, name)
 
 extern void destroy_workqueue(struct workqueue_struct *wq);
 


Patches currently in stable-queue which might be from tj@xxxxxxxxxx are

queue-3.10/ahci-add-device-ids-for-intel-9-series-pch.patch
queue-3.10/ahci-add-pcid-for-marvel-0x9182-controller.patch
queue-3.10/pata_scc-propagate-return-value-of-scc_wait_after_reset.patch
queue-3.10/workqueue-apply-__wq_ordered-to-create_singlethread_workqueue.patch
queue-3.10/cfq-iosched-fix-wrong-children_weight-calculation.patch
--
To unsubscribe from this list: send the line "unsubscribe stable" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
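To illustrate why the omission mattered, here is a minimal,
hypothetical caller of the kind the changelog is worried about; the
my_drv_* names are invented for the example and do not come from the
patch.  Callers like this assume that work items on the workqueue run
one at a time, in the order queued, which is exactly what the missing
__WQ_ORDERED stopped guaranteeing on multi-node machines.

#include <linux/module.h>
#include <linux/workqueue.h>

static struct workqueue_struct *my_drv_wq;

static void my_drv_prepare_fn(struct work_struct *work)
{
	/* stage 1: must complete before the commit work runs */
}

static void my_drv_commit_fn(struct work_struct *work)
{
	/* stage 2: relies on single-threaded, in-order execution */
}

static DECLARE_WORK(my_drv_prepare, my_drv_prepare_fn);
static DECLARE_WORK(my_drv_commit, my_drv_commit_fn);

static int __init my_drv_init(void)
{
	/* historically: one worker, strict FIFO execution of queued items */
	my_drv_wq = create_singlethread_workqueue("my_drv");
	if (!my_drv_wq)
		return -ENOMEM;

	queue_work(my_drv_wq, &my_drv_prepare);
	queue_work(my_drv_wq, &my_drv_commit);	/* expected to run after prepare */
	return 0;
}

static void __exit my_drv_exit(void)
{
	destroy_workqueue(my_drv_wq);
}

module_init(my_drv_init);
module_exit(my_drv_exit);
MODULE_LICENSE("GPL");

With the patch applied, such a workqueue again carries __WQ_ORDERED and
therefore gets a single pool_workqueue regardless of NUMA topology.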