Re: [PATCH] fs,xfs: fix missed wakeup on l_flush_wait

On 7 May 2019, at 17:22, Dave Chinner wrote:

> On Tue, May 07, 2019 at 01:05:28PM -0400, Rik van Riel wrote:
>> The code in xlog_wait uses the spinlock to make adding the task to
>> the wait queue, and setting the task state to UNINTERRUPTIBLE atomic
>> with respect to the waker.
>>
>> Doing the wakeup after releasing the spinlock opens up the following
>> race condition:
>>
>> - add task to wait queue
>>
>> -                                      wake up task
>>
>> - set task state to UNINTERRUPTIBLE
>>
>> Simply moving the spin_unlock to after the wake_up_all results
>> in the waker not being able to see a task on the waitqueue before
>> it has set its state to UNINTERRUPTIBLE.
>
> Yup, seems like an issue. Good find, Rik.
>
> So, what problem is this actually fixing? Was it noticed by
> inspection, or is it actually manifesting on production machines?
> If it is manifesting IRL, what are the symptoms (e.g. hang running
> out of log space?) and do you have a test case or any way to
> exercise it easily?

The steps to reproduce are semi-complicated: they create a bunch of 
files, do stuff, and then delete all the files in a loop.  I think they 
shotgunned it across 500 or so machines to trigger it 5 times, and then 
left the wreckage for us to poke at.

The symptoms were identical to the bug fixed here:

commit 696a562072e3c14bcd13ae5acc19cdf27679e865
Author: Brian Foster <bfoster@xxxxxxxxxx>
Date:   Tue Mar 28 14:51:44 2017 -0700

xfs: use dedicated log worker wq to avoid deadlock with cil wq

But since our 4.16 kernel is newer than that, I briefly hoped that 
m_sync_workqueue needed to be flagged with WQ_MEM_RECLAIM.  I don't have 
a great picture of how all of these workqueues interact, but I do think 
it needs WQ_MEM_RECLAIM.  It can't be the cause of this deadlock, 
though; the workqueue watchdog would have fired.
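[Editor's note: if m_sync_workqueue does get flagged, the change would look roughly like the sketch below, in xfs_init_mount_workqueues(). The existing flags are quoted from memory and should be checked against the tree; WQ_MEM_RECLAIM guarantees the workqueue a rescuer thread so it can always make forward progress under memory pressure:]

```c
/* Sketch only: add WQ_MEM_RECLAIM to the sync workqueue allocation.
 * Existing flags/arguments recalled from memory, verify before use. */
mp->m_sync_workqueue = alloc_workqueue("xfs-sync/%s",
		WQ_MEM_RECLAIM | WQ_FREEZABLE, 0, mp->m_fsname);
```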

Rik mentioned that I found sleeping procs with an empty iclog waitqueue 
list, which is when he noticed this race.  We sent a wakeup to the 
sleeping process, and ftrace showed the process looping back around to 
sleep on the iclog again.  Long story short, Rik's patch definitely 
wouldn't have prevented the deadlock, and the iclog waitqueue I was 
poking must not have been the same one that process was sleeping on.

The actual problem ended up being the blk-mq IO schedulers sitting on a 
request.  Switching schedulers makes the box come back to life, so it's 
either a kyber bug or something slightly higher up in blk-mq land.

That's a huge tangent around acking Rik's patch, but it's hard to be 
sure whether we've hit the lost wakeup in prod.  I could search through 
all the related hung task timeouts, but they are probably all stuck in 
blk-mq.

Acked-but-I'm-still-blaming-Jens-by: Chris Mason <clm@xxxxxx>

-chris
