On 4/25/21 1:57 AM, Ming Lei wrote:
> However, still one request UAF not covered: refcount_inc_not_zero() may
> read one freed request, and it will be handled in next patch.

This means that patch "blk-mq: clear stale request in tags->rq[] before
freeing one request pool" should come before this patch.

> @@ -276,12 +277,15 @@ static bool bt_tags_iter(struct sbitmap *bitmap, unsigned int bitnr, void *data)
>  		rq = tags->static_rqs[bitnr];
>  	else
>  		rq = tags->rqs[bitnr];
> -	if (!rq)
> +	if (!rq || !refcount_inc_not_zero(&rq->ref))
>  		return true;
>  	if ((iter_data->flags & BT_TAG_ITER_STARTED) &&
>  	    !blk_mq_request_started(rq))
> -		return true;
> -	return iter_data->fn(rq, iter_data->data, reserved);
> +		ret = true;
> +	else
> +		ret = iter_data->fn(rq, iter_data->data, reserved);
> +	blk_mq_put_rq_ref(rq);
> +	return ret;
>  }

Even if patches 7/8 and 8/8 were reordered, the above code introduces a
new use-after-free, one that is much worse than the UAF in kernel v5.11.
The following sequence can be triggered by the above code:
* bt_tags_iter() reads tags->rqs[bitnr] and stores the request pointer
  in the 'rq' variable.
* Request 'rq' completes, tags->rqs[bitnr] is cleared and the memory
  that backs that request is freed.
* The memory that backs 'rq' is reused for another purpose and the
  request reference count becomes nonzero.
* bt_tags_iter() increments the request reference count and thereby
  corrupts memory (see the annotated excerpt below).

Bart.
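
P.S. To make the window concrete, here is the patched helper once more as an
annotated excerpt. Only the lines quoted in the hunk above come from the
patch; the declarations at the top are filled in from my reading of
block/blk-mq-tag.c for context, so treat this as a sketch rather than an
exact copy of the series.

static bool bt_tags_iter(struct sbitmap *bitmap, unsigned int bitnr, void *data)
{
	struct bt_tags_iter_data *iter_data = data;
	struct blk_mq_tags *tags = iter_data->tags;
	bool reserved = iter_data->flags & BT_TAG_ITER_RESERVED;
	struct request *rq;
	bool ret = true;

	if (!reserved)
		bitnr += tags->nr_reserved_tags;

	if (iter_data->flags & BT_TAG_ITER_STATIC_RQS)
		rq = tags->static_rqs[bitnr];
	else
		rq = tags->rqs[bitnr];		/* (1) load the request pointer */

	/*
	 * Race window: between (1) and (2) the request can complete,
	 * tags->rqs[bitnr] can be cleared and the memory behind 'rq'
	 * can be freed and reused for another purpose.
	 */

	if (!rq || !refcount_inc_not_zero(&rq->ref))	/* (2) increments whatever */
		return true;				/*     now lives at rq->ref */

	if ((iter_data->flags & BT_TAG_ITER_STARTED) &&
	    !blk_mq_request_started(rq))
		ret = true;
	else
		ret = iter_data->fn(rq, iter_data->data, reserved);
	blk_mq_put_rq_ref(rq);		/* drops the reference taken at (2) */
	return ret;
}

In other words, the refcount_inc_not_zero() at (2) is only safe if something
guarantees that the request memory cannot be freed between (1) and (2).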