On 10/31/22 4:12 PM, Al Viro wrote:
> static void blk_add_rq_to_plug(struct blk_plug *plug, struct request *rq)
> {
> 	struct request *last = rq_list_peek(&plug->mq_list);
>
> Suppose it's not NULL...
>
> 	if (!plug->rq_count) {
> 		trace_block_plug(rq->q);
> 	} else if (plug->rq_count >= blk_plug_max_rq_count(plug) ||
> 		   (!blk_queue_nomerges(rq->q) &&
> 		    blk_rq_bytes(last) >= BLK_PLUG_FLUSH_SIZE)) {
>
> ... and we went here:
>
> 		blk_mq_flush_plug_list(plug, false);
>
> All requests, including the one last points to, might get fed ->queue_rq()
> here. At which point there seems to be nothing to prevent them getting
> completed and freed on another CPU, possibly before we return here.
>
> 		trace_block_plug(rq->q);
> 	}
>
> 	if (!plug->multiple_queues && last && last->q != rq->q)
>
> ... and here we dereference last.
>
> Shouldn't we reset last to NULL after the call of blk_mq_flush_plug_list()
> above?

There's no use-after-free (UAF) here, as the requests aren't freed at that
point. But we could clear 'last' to make the code more explicit, and that
would also avoid any potentially suboptimal behavior from the
->multiple_queues check being made against a stale request.

-- 
Jens Axboe