On Fri, Oct 21, 2022 at 08:32:31AM -0600, Keith Busch wrote:
> On Thu, Oct 20, 2022 at 05:10:13PM +0800, Ming Lei wrote:
> > @@ -1593,10 +1598,17 @@ static void blk_mq_timeout_work(struct work_struct *work)
> >  	if (!percpu_ref_tryget(&q->q_usage_counter))
> >  		return;
> >  
> > -	blk_mq_queue_tag_busy_iter(q, blk_mq_check_expired, &next);
> > +	/* Before walking tags, we must ensure any submit started before the
> > +	 * current time has finished. Since the submit uses srcu or rcu, wait
> > +	 * for a synchronization point to ensure all running submits have
> > +	 * finished
> > +	 */
> > +	blk_mq_wait_quiesce_done(q);
> > +
> > +	blk_mq_queue_tag_busy_iter(q, blk_mq_check_expired, &expired);
> 
> The blk_mq_wait_quiesce_done() will only wait for tasks that entered
> just before calling that function. It will not wait for tasks that
> entered immediately after.

Yeah, but the patch records the jiffies before calling
blk_mq_wait_quiesce_done(), and only times out requests which expired
before the recorded time, so it is fine to use blk_mq_wait_quiesce_done()
in this way.

> If I correctly understand the problem you're describing, the hypervisor
> may prevent any guest process from running. If so, the timeout work may
> be stalled after the quiesce, and if a queue_rq() process also stalled
> after starting quiesce_done(), then we're in the same situation you're
> trying to prevent, right?

No, the stall happens on just one vCPU, while the other vCPUs may run
smoothly:

1) A vmexit stalls only one vCPU, and some vmexits (such as an external
interrupt) can come at any time.

2) A vCPU is usually emulated by a pthread, which is just a normal host
userspace thread; it can be preempted at any time, and the preemption
latency can be long when the host load is heavy.

So it is as if a random stall were injected while running any
instruction of the VM kernel code.
> I agree with your idea that this is a lower level driver responsibility:
> it should reclaim all started requests before allowing new queuing.
> Perhaps the block layer should also raise a clear warning if it's
> queueing a request that's already started.

The thing is that this is a generic issue: lots of VM drivers could be
affected, and it may not be easy for drivers to handle the race either.

Thanks,
Ming