On Tue, Jan 02, 2018 at 08:16:56AM -0800, Tejun Heo wrote:
> Hello,
>
> On Fri, Dec 29, 2017 at 02:07:16AM +0530, Prateek Sood wrote:
> > task T is waiting for cpuset_mutex acquired
> > by kworker/2:1
> >
> > sh ==> cpuhp/2 ==> kworker/2:1 ==> sh
> >
> > kworker/2:3 ==> kthreadd ==> Task T ==> kworker/2:1
> >
> > It seems that my earlier patch set should fix this scenario:
> > 1) Inverting the locking order of cpuset_mutex and cpu_hotplug_lock.
> > 2) Making the cpuset hotplug work synchronous.
> >
> > Could you please share your feedback?
>
> Hmm... this can also be resolved by adding WQ_MEM_RECLAIM to the
> synchronize_rcu workqueue, right?  Given the widespread usage of
> synchronize_rcu and friends, maybe that's the right solution, or at
> least something we also need to do, for this particular deadlock?

To make WQ_MEM_RECLAIM work, I need to dynamically allocate RCU's
workqueues, correct?  Or is there some way to mark a statically
allocated workqueue as WQ_MEM_RECLAIM after the fact?

I can dynamically allocate them, but I need to carefully investigate
boot-time use.  So if it is possible to be lazy, I do want to take the
easy way out.  ;-)

							Thanx, Paul

> Again, I don't have anything against making the domain-rebuilding part
> of cpuset operations synchronous, and these tricky deadlock scenarios
> do indicate that doing so would probably be beneficial.  That said,
> though, these scenarios seem more like manifestations of other
> problems exposed through the kthreadd dependency than anything else.
>
> Thanks.
>
> --
> tejun
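
For what it's worth, a minimal sketch of the dynamic-allocation approach
under discussion might look like the following.  This is illustrative
only: the workqueue name and the init function are assumptions for the
sake of the example, not an actual patch, but alloc_workqueue() is the
standard way to obtain a WQ_MEM_RECLAIM workqueue, since a statically
declared workqueue cannot gain the flag after the fact:

```c
/* Sketch only: allocating an RCU workqueue with WQ_MEM_RECLAIM at
 * boot.  The name "rcu_gp" and the rcu_init_wq() hook are
 * hypothetical; placement relative to early boot would need the
 * careful investigation Paul mentions above. */
#include <linux/workqueue.h>
#include <linux/init.h>

static struct workqueue_struct *rcu_gp_wq;

void __init rcu_init_wq(void)
{
	/* WQ_MEM_RECLAIM guarantees a dedicated rescuer thread, so
	 * work queued here can make forward progress even when new
	 * kworker creation (via kthreadd) is blocked -- breaking the
	 * dependency chain shown in Prateek's report. */
	rcu_gp_wq = alloc_workqueue("rcu_gp", WQ_MEM_RECLAIM, 0);
	WARN_ON(!rcu_gp_wq);
}
```
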