On Sat, Mar 07, 2020 at 09:52:21PM -0800, Eric Biggers wrote:
> From: Eric Biggers <ebiggers@xxxxxxxxxx>
>
> When a thread loses the workqueue allocation race in
> sb_init_dio_done_wq(), lockdep reports that the call to
> destroy_workqueue() can deadlock waiting for work to complete. This is
> a false positive since the workqueue is empty. But we shouldn't simply
> skip the lockdep check for empty workqueues for everyone.

Why not? If the wq is empty, it can't deadlock, so this is a problem
with the workqueue lockdep annotations, not a problem with code that
is destroying an empty workqueue.

> Just avoid this issue by using a mutex to serialize the workqueue
> allocation. We still keep the preliminary check for ->s_dio_done_wq, so
> this doesn't affect direct I/O performance.
>
> Also fix the preliminary check for ->s_dio_done_wq to use READ_ONCE(),
> since it's a data race. (That part wasn't actually found by syzbot yet,
> but it could be detected by KCSAN in the future.)
>
> Note: the lockdep false positive could alternatively be fixed by
> introducing a new function like "destroy_unused_workqueue()" to the
> workqueue API as previously suggested. But I think it makes sense to
> avoid the double allocation anyway.

Fix the infrastructure; don't work around it by placing constraints on
how the callers can use the infrastructure to work around problems
internal to the infrastructure.

Cheers,

Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx
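
For context, a minimal sketch of the mutex-serialized allocation the
quoted commit message describes might look roughly like the following.
The function name and the mutex (example_init_dio_done_wq,
example_dio_wq_lock) are illustrative assumptions, not the actual
fs/direct-io.c patch; the point is only the READ_ONCE() fast path plus
a mutex-protected re-check so that a single thread ever calls
alloc_workqueue().

	#include <linux/fs.h>
	#include <linux/mutex.h>
	#include <linux/workqueue.h>

	/* Serializes the (rare) slow-path allocation; illustrative only. */
	static DEFINE_MUTEX(example_dio_wq_lock);

	static int example_init_dio_done_wq(struct super_block *sb)
	{
		struct workqueue_struct *wq;
		int ret = 0;

		/* Fast path: workqueue already set up, no locking needed. */
		if (READ_ONCE(sb->s_dio_done_wq))
			return 0;

		mutex_lock(&example_dio_wq_lock);
		/* Re-check under the mutex in case another thread got here first. */
		if (!sb->s_dio_done_wq) {
			wq = alloc_workqueue("dio/%s", WQ_MEM_RECLAIM, 0, sb->s_id);
			if (!wq)
				ret = -ENOMEM;
			else
				WRITE_ONCE(sb->s_dio_done_wq, wq);
		}
		mutex_unlock(&example_dio_wq_lock);

		return ret;
	}

Because only one caller ever reaches alloc_workqueue(), the losing
thread's destroy_workqueue() call that triggers the lockdep report
disappears entirely, which is the effect the quoted commit message is
aiming for.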