Re: [bug report] worker watchdog timeout in dispatch loop for null_blk

On Mar 11, 2022 / 17:51, Ming Lei wrote:
> On Fri, Mar 11, 2022 at 06:24:41AM +0000, Shinichiro Kawasaki wrote:
> > On Mar 10, 2022 / 05:47, Jens Axboe wrote:
> > > On 3/10/22 5:40 AM, Shinichiro Kawasaki wrote:
> > > > On Mar 10, 2022 / 18:00, Ming Lei wrote:
> > > >> On Thu, Mar 10, 2022 at 09:16:50AM +0000, Shinichiro Kawasaki wrote:
> > > >>> This issue does not look critical, but let me share it to ask for comments on a fix.
> > > >>>
> > > >>> When a fio command with 40 jobs [1] is run against a null_blk device with
> > > >>> memory backing and the mq-deadline scheduler, the kernel reports a BUG
> > > >>> message [2]. The workqueue watchdog reports that kblockd blk_mq_run_work_fn
> > > >>> keeps running for more than 30 seconds and other work cannot run. The 40 fio
> > > >>> jobs keep creating many read requests to a single null_blk device, so every
> > > >>> time the mq_run task calls __blk_mq_do_dispatch_sched(), it returns ret == 1,
> > > >>> which means at least one request was dispatched. Hence, the while loop in
> > > >>> blk_mq_do_dispatch_sched() does not break.
> > > >>>
> > > >>> static int blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
> > > >>> {
> > > >>>         int ret;
> > > >>>
> > > >>>         do {
> > > >>>                ret = __blk_mq_do_dispatch_sched(hctx);
> > > >>>         } while (ret == 1);
> > > >>>
> > > >>>         return ret;
> > > >>> }
> > > >>>
> > > >>> The BUG message was observed when I ran blktests block/005 under various
> > > >>> conditions on a system with 40 CPUs. It was observed with kernel versions
> > > >>> v5.16-rc1 through v5.17-rc7. The trigger commit was 0a593fbbc245 ("null_blk:
> > > >>> poll queue support"). This commit added the blk_mq_ops.map_queues callback. I
> > > >>> guess it changed the dispatch behavior for null_blk devices and triggered the
> > > >>> BUG message.
> > > >>
> > > >> It is a blk-mq soft lockup issue on the dispatch side, and it shouldn't be
> > > >> related to 0a593fbbc245.
> > > >>
> > > >> If queueing requests is faster than dispatching them, the issue will be
> > > >> triggered sooner or later, and it is especially easy to trigger on a SQ
> > > >> device. I am sure it can be triggered on scsi_debug; I even saw such a
> > > >> report on ahci.
> > > > 
> > > > Thank you for the comments. Then this is a real problem.
> > > > 
> > > >>
> > > >>>
> > > >>> I'm not so sure if we really need to fix this issue. It does not seem to be a
> > > >>> real-world problem since it is observed only with null_blk. Real block devices
> > > >>> have slower IO operations, so the dispatch should stop sooner when the hardware
> > > >>> queue gets full. Also, 40 jobs for a single device is not a realistic workload.
> > > >>>
> > > >>> Having said that, it does not feel right that other work items are left
> > > >>> pending during dispatch for null_blk devices. To avoid the BUG message, I can
> > > >>> think of two fix approaches. The first one is to break the while loop in
> > > >>> blk_mq_do_dispatch_sched() using a loop counter [3] (or a jiffies timeout check).
> > > >>
> > > >> This way could work, but the queue needs to be re-run after breaking due to
> > > >> hitting the max dispatch number. cond_resched() might be the simplest way,
> > > >> but it can't be used here because of the rcu/srcu read lock.
> > > > 
> > > > As far as I understand, blk_mq_run_work_fn() should return after the loop break
> > > > to yield the worker to other work items. How about calling
> > > > blk_mq_delay_run_hw_queue() at the loop break? Does this re-run the dispatch?
> > > > 
> > > > 
> > > > diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
> > > > index 55488ba978232..faa29448a72a0 100644
> > > > --- a/block/blk-mq-sched.c
> > > > +++ b/block/blk-mq-sched.c
> > > > @@ -178,13 +178,19 @@ static int __blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
> > > >  	return !!dispatched;
> > > >  }
> > > >  
> > > > +#define MQ_DISPATCH_MAX 0x10000
> > > > +
> > > >  static int blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
> > > >  {
> > > >  	int ret;
> > > > +	unsigned int count = MQ_DISPATCH_MAX;
> > > >  
> > > >  	do {
> > > >  		ret = __blk_mq_do_dispatch_sched(hctx);
> > > > -	} while (ret == 1);
> > > > +	} while (ret == 1 && count--);
> > > > +
> > > > +	if (ret == 1 && !count)
> > > > +		blk_mq_delay_run_hw_queue(hctx, 0);
> > > >  
> > > >  	return ret;
> > > >  }
> > > 
> > > Why not just gate it on needing to reschedule, rather than some random
> > > value?
> > > 
> > > static int blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
> > > {
> > > 	int ret;
> > > 
> > > 	do {
> > > 		ret = __blk_mq_do_dispatch_sched(hctx);
> > > 	} while (ret == 1 && !need_resched());
> > > 
> > > 	if (ret == 1 && need_resched())
> > > 		blk_mq_delay_run_hw_queue(hctx, 0);
> > > 
> > > 	return ret;
> > > }
> > > 
> > > or something like that.
> > 
> > Jens, thanks for the idea, but the need_resched() check does not seem to work here.
> > I tried the code above but the BUG message is still observed. My guess is that,
> > in the call stack below, every __blk_mq_do_dispatch_sched() call results in a
> > might_sleep_if() call, so need_resched() probably does not work as expected.
> > 
> > __blk_mq_do_dispatch_sched
> >   blk_mq_dispatch_rq_list
> >     q->mq_ops->queue_rq
> >       null_queue_rq
> >         might_sleep_if
> > 
> > Now I'm trying to find a way similar to need_resched() that avoids the random
> > number. So far I haven't found a good idea.
> 
> Jens' patch using need_resched() looks like it improves the situation; also, the
> scsi_debug case won't set BLOCKING:
> 
> 1) Without the patch, it is easy to trigger the lockup with the
> following test.
> 
> - modprobe scsi_debug virtual_gb=128 delay=0 dev_size_mb=2048
> - fio --bs=512k --ioengine=sync --iodepth=128 --numjobs=4 --rw=randrw \
> 	--name=sdc-sync-randrw-512k --filename=/dev/sdc --direct=1 --size=60G --runtime=120
> 
> #sdc is the created scsi_debug disk

Thanks. I tried the workload above and observed the lockup BUG message on my
system. So I reconfirmed that the problem happens with both BLOCKING and
non-BLOCKING drivers.

Regarding the solution, I cannot think of a good one. I tried removing the
WQ_HIGHPRI flag from kblockd_workqueue, but it did not look like it affects the
need_resched() behavior. I walked through the workqueue API, but was not able to
find anything useful.

As far as I understand, it is assumed and expected that each work item completes
within a decent time. Then blk_mq_run_work_fn() must stop within a decent time by
breaking the loop at some point. As loop break conditions other than need_resched(),
I can think of 1) a loop count, 2) the number of requests dispatched, or 3) the time
spent in the loop. All three require a magic number as the limit... Is there any
better way?

If we need to choose one of the three options, I think '3) the time spent in the loop'
is better than the others, since the workqueue watchdog monitors _time_ to detect the
lockup and report the BUG message. A rough sketch of that approach is below.
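
For illustration only (not a tested patch): a rough sketch of option 3, assuming an
arbitrary one-second budget (the magic number problem remains) and re-running the hw
queue via blk_mq_delay_run_hw_queue() as in the earlier patch:

static int blk_mq_do_dispatch_sched(struct blk_mq_hw_ctx *hctx)
{
	/* magic limit: give up after roughly one second of dispatching */
	unsigned long deadline = jiffies + HZ;
	int ret;

	do {
		ret = __blk_mq_do_dispatch_sched(hctx);
	} while (ret == 1 && time_before(jiffies, deadline));

	/*
	 * The budget was exhausted while requests were still being dispatched:
	 * yield the kblockd worker and re-run the hw queue later.
	 */
	if (ret == 1 && !time_before(jiffies, deadline))
		blk_mq_delay_run_hw_queue(hctx, 0);

	return ret;
}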

-- 
Best Regards,
Shin'ichiro Kawasaki


