On Mon, Jan 08, 2018 at 09:06:55PM +0000, Bart Van Assche wrote:
> On Mon, 2018-01-08 at 11:15 -0800, Tejun Heo wrote:
> > +static void blk_mq_rq_update_aborted_gstate(struct request *rq, u64 gstate)
> > +{
> > +	unsigned long flags;
> > +
> > +	local_irq_save(flags);
> > +	u64_stats_update_begin(&rq->aborted_gstate_sync);
> > +	rq->aborted_gstate = gstate;
> > +	u64_stats_update_end(&rq->aborted_gstate_sync);
> > +	local_irq_restore(flags);
> > +}
>
> Please add a comment that explains the purpose of local_irq_save() and
> local_irq_restore(). Please also explain why you chose to disable interrupts

Will do.

> instead of disabling preemption. I think that disabling preemption should be
> sufficient since this is the only code that updates rq->aborted_gstate and
> since this function is always called from thread context.

blk_mq_complete_request() can read it from irq context.  If that
happens between update_begin and end, it'll end up looping infinitely.

> > @@ -801,6 +840,12 @@ void blk_mq_rq_timed_out(struct request *req, bool reserved)
> >  		__blk_mq_complete_request(req);
> >  		break;
> >  	case BLK_EH_RESET_TIMER:
> > +		/*
> > +		 * As nothing prevents from completion happening while
> > +		 * ->aborted_gstate is set, this may lead to ignored
> > +		 * completions and further spurious timeouts.
> > +		 */
> > +		blk_mq_rq_update_aborted_gstate(req, 0);
> >  		blk_add_timer(req);
> >  		blk_clear_rq_complete(req);
> >  		break;
>
> Is the race that the comment refers to addressed by one of the later patches?

No, but it's not a new race.  It has always been there and I suppose
doesn't lead to practical problems - the race window is pretty small
and the effect isn't critical.  I'm just documenting the existing race
condition.  Will note that in the description.

Thanks.

--
tejun