On Tue, Aug 17, 2021 at 10:23:06AM +0800, Yu Kuai wrote:
> If ioscheduler is not none, hctx->tags->rq[tag] will point to
> hctx->sched_tags->static_rq[internel_tag] in blk_mq_get_driver_tag().
> However, static_rq of sched_tags might be freed through switching
> elevator or increasing nr_requests. Thus leave a window for some drivers
> to get the freed request through blk_mq_tag_to_rq(tags, tag).

I believe I have explained that this is a bug of the driver, which knows
whether the passed tag is valid or not. We are clear that drivers need to
cover the race between normal completion and timeout/error handling.

> 
> It's difficult to fix this uaf from driver side, I'm thinking about

So far I haven't seen any analysis of why the UAF is triggered; care to
investigate the reason?

> following solution:
> 
> a. clear rq mapping in driver tags before freeing rqs in sched tags

We have done that already, see blk_mq_free_rqs().

> b. provide a new interface to replace blk_mq_tag_to_rq(), the new
> interface will make sure it won't return freed rq.

b) in your previous patch can't avoid the UAF:

https://lore.kernel.org/linux-block/YRHa%2FkeJ4pHP3hnL@T590/

> 
> Signed-off-by: Yu Kuai <yukuai3@xxxxxxxxxx>
> ---
>  block/blk-mq-sched.c | 10 +++++++++-
>  block/blk-mq.c       | 13 +++++++++++--
>  block/blk-mq.h       |  2 ++
>  3 files changed, 22 insertions(+), 3 deletions(-)
> 
> diff --git a/block/blk-mq-sched.c b/block/blk-mq-sched.c
> index 0f006cabfd91..9f11f17b8380 100644
> --- a/block/blk-mq-sched.c
> +++ b/block/blk-mq-sched.c
> @@ -662,8 +662,16 @@ void blk_mq_sched_free_requests(struct request_queue *q)
>  	int i;
>  
>  	queue_for_each_hw_ctx(q, hctx, i) {
> -		if (hctx->sched_tags)
> +		if (hctx->sched_tags) {
> +			/*
> +			 * We are about to free requests in 'sched_tags[]',
> +			 * however, 'tags[]' may still point to these requests.
> +			 * Thus we need to clear rq mapping in 'tags[]' before
> +			 * freeing requests in sched_tags[].
> +			 */
> +			blk_mq_clear_rq_mapping(q->tag_set, hctx->tags, i);
>  			blk_mq_free_rqs(q->tag_set, hctx->sched_tags, i);

blk_mq_clear_rq_mapping() has already been called from blk_mq_free_rqs()
to clear the request references. In theory, we only need to clear them
when a real I/O scheduler is attached, but so far we do it for both an
I/O scheduler and none.

> +		}
>  	}
>  }
> 
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index d185be64c85f..b1e30464f87f 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -2314,8 +2314,8 @@ static size_t order_to_size(unsigned int order)
>  }
>  
>  /* called before freeing request pool in @tags */
> -static void blk_mq_clear_rq_mapping(struct blk_mq_tag_set *set,
> -		struct blk_mq_tags *tags, unsigned int hctx_idx)
> +void blk_mq_clear_rq_mapping(struct blk_mq_tag_set *set,
> +		struct blk_mq_tags *tags, unsigned int hctx_idx)
> {
>  	struct blk_mq_tags *drv_tags = set->tags[hctx_idx];
>  	struct page *page;
> @@ -3632,6 +3632,15 @@ int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr)
>  		if (!ret && blk_mq_is_sbitmap_shared(set->flags))
>  			blk_mq_tag_resize_shared_sbitmap(set, nr);
>  	} else {
> +		/*
> +		 * We are about to free requests in 'sched_tags[]',
> +		 * however, 'tags[]' may still point to these requests.
> +		 * Thus we need to clear rq mapping in 'tags[]' before
> +		 * freeing requests in sched_tags[].
> +		 */
> +		if (nr > hctx->sched_tags->nr_tags)
> +			blk_mq_clear_rq_mapping(set, hctx->tags, i);
> +
>  		ret = blk_mq_tag_update_depth(hctx, &hctx->sched_tags,
>  						nr, true);

The request references have been cleared in blk_mq_tag_update_depth() too:

	blk_mq_tag_update_depth
		blk_mq_free_rqs
			blk_mq_clear_rq_mapping

Thanks,
Ming