Re: [PATCH 1/1] Set prev_flush_start and flush_bio in an atomic way


 




> On Jan 12, 2021, at 1:24 AM, Xiao Ni <xni@xxxxxxxxxx> wrote:
> 
> Hi Song
> 
> Could you review this patch?
> 

I am sorry for the delay. Just applied it to md-fixes. 

I made some changes to it:
 1. Added the "md:" prefix to the commit log subject.
 2. Adjusted the width of the commit log to 75 characters.

For future patches, please run ./scripts/checkpatch.pl on them first.
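For example, from the top of the kernel tree (the patch file name here is just
a placeholder for whatever git format-patch produced):

    ./scripts/checkpatch.pl 0001-md-flush-fix.patch

Among other things, it warns when commit description lines exceed 75 characters.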

Thanks,
Song

> Regards
> Xiao
> 
> On 12/10/2020 02:33 PM, Xiao Ni wrote:
>> A customer reported a crash caused by a flush request. A warning is
>> triggered before the crash.
>> 
>>         /* new request after previous flush is completed */
>>         if (ktime_after(req_start, mddev->prev_flush_start)) {
>>                 WARN_ON(mddev->flush_bio);
>>                 mddev->flush_bio = bio;
>>                 bio = NULL;
>>         }
>> 
>> The WARN_ON is triggered. We use a spin lock to protect prev_flush_start and
>> flush_bio in md_flush_request, but there is no lock protection in
>> md_submit_flush_data. Because the compiler may reorder the write instructions
>> there, flush_bio can be set to NULL before prev_flush_start is updated.
>> 
>> For example, flush bio1 sets flush_bio to NULL first in md_submit_flush_data.
>> An interrupt, or VMware causing an extended stall, happens between the updates
>> of flush_bio and prev_flush_start. Because flush_bio is NULL, flush bio2 can
>> take the lock and be submitted to the underlying disks. Flush bio1 then
>> updates prev_flush_start only after the interrupt or extended stall.
>> 
>> Then flush bio3 enters md_flush_request. Its start time req_start is after
>> prev_flush_start, but flush_bio is not NULL (flush bio2 has not finished), so
>> the WARN_ON triggers. INIT_WORK is then called again. INIT_WORK() re-initializes
>> the list pointers in the work_struct, which can result in a corrupted work list
>> and the work_struct being queued a second time. With the work list corrupted,
>> invalid work items can be used and cause a crash in process_one_work.
>> 
>> We need to make sure only one flush bio can be handled at a time. So add a
>> spin lock in md_submit_flush_data to protect prev_flush_start and flush_bio
>> in an atomic way.
>> 
>> Reviewed-by: David Jeffery <djeffery@xxxxxxxxxx>
>> Signed-off-by: Xiao Ni <xni@xxxxxxxxxx>
>> ---
>>  drivers/md/md.c | 2 ++
>>  1 file changed, 2 insertions(+)
>> 
>> diff --git a/drivers/md/md.c b/drivers/md/md.c
>> index c42af46..2746d5c 100644
>> --- a/drivers/md/md.c
>> +++ b/drivers/md/md.c
>> @@ -639,8 +639,10 @@ static void md_submit_flush_data(struct work_struct *ws)
>>  	 * could wait for this and below md_handle_request could wait for those
>>  	 * bios because of suspend check
>>  	 */
>> +	spin_lock_irq(&mddev->lock);
>>  	mddev->prev_flush_start = mddev->start_flush;
>>  	mddev->flush_bio = NULL;
>> +	spin_unlock_irq(&mddev->lock);
>>  	wake_up(&mddev->sb_wait);
>>    	if (bio->bi_iter.bi_size == 0) {
> 





