On 5/26/21 8:33 PM, Damien Le Moal wrote:
> On 2021/05/27 10:02, Bart Van Assche wrote:
>> For interactive workloads it is important that synchronous requests are
>> not delayed. Hence reserve 25% of tags for synchronous requests. This patch
>
> s/tags/scheduler tags
>
> to be clear that we are not talking about the device tags. Same in the patch
> title, maybe.

OK.

>> +static void dd_limit_depth(unsigned int op, struct blk_mq_alloc_data *data)
>
> Similarly to what you did in patch 1, maybe add a comment about this operation
> and when it is called?

Will do.

>> +static void dd_depth_updated(struct blk_mq_hw_ctx *hctx)
>> +{
>> +	struct request_queue *q = hctx->queue;
>> +	struct deadline_data *dd = q->elevator->elevator_data;
>> +	struct blk_mq_tags *tags = hctx->sched_tags;
>> +
>> +	dd->async_depth = 3 * q->nr_requests / 4;
>
> I think that nr_requests is always at least 2, but it may be good to have a
> sanity check here that we do not end up with async_depth == 0, no?

OK, I will add a check.

Thanks,

Bart.
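
For illustration, below is a minimal sketch of how the two changes agreed on above could look. This is an assumption-laden sketch, not necessarily the code that was eventually merged: it assumes the throttling is applied via data->shallow_depth (the mechanism used by other blk-mq schedulers such as BFQ), that the zero-depth check is a simple clamp with max(), and that async_depth is propagated to the scheduler tags via sbitmap_queue_min_shallow_depth(); field layouts (e.g. whether bitmap_tags is a pointer) vary between kernel versions.

/*
 * Sketch only: a limit_depth callback of this kind is invoked from
 * blk_mq_get_tag() before a scheduler tag is allocated, so that
 * asynchronous requests and writes never consume more than
 * async_depth scheduler tags.
 */
static void dd_limit_depth(unsigned int op, struct blk_mq_alloc_data *data)
{
	struct deadline_data *dd = data->q->elevator->elevator_data;

	/* Do not throttle synchronous reads. */
	if (op_is_sync(op) && !op_is_write(op))
		return;

	/* Restrict async requests and writes to async_depth scheduler tags. */
	data->shallow_depth = dd->async_depth;
}

static void dd_depth_updated(struct blk_mq_hw_ctx *hctx)
{
	struct request_queue *q = hctx->queue;
	struct deadline_data *dd = q->elevator->elevator_data;
	struct blk_mq_tags *tags = hctx->sched_tags;

	/*
	 * Reserve 25% of the scheduler tags for synchronous requests, but
	 * (assumed fix) never let async_depth drop to zero.
	 */
	dd->async_depth = max(1UL, 3 * q->nr_requests / 4);

	sbitmap_queue_min_shallow_depth(tags->bitmap_tags, dd->async_depth);
}

Capping shallow_depth only for asynchronous requests and writes leaves synchronous reads free to use the full scheduler tag space, which is what keeps interactive workloads responsive when the queue is saturated with background writes.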