On 2022/2/17 4:48 PM, Christoph Hellwig wrote:
>>  {
>>  	struct request_queue *q = rqos->q;
>> -	const char *dir_name = rq_qos_id_to_name(rqos->id);
>> +	const char *dir_name;
>> +
>> +	dir_name = rqos->ops->name ? rqos->ops->name : rq_qos_id_to_name(rqos->id);
>
> Overly long line here.  And it would be much more readable if you used
> a good old if/else.
>
>> +static DEFINE_IDA(rq_qos_ida);
>> +static int nr_rqos_blkcg_pols;
>> +static DEFINE_MUTEX(rq_qos_mutex);
>> +static LIST_HEAD(rq_qos_list);
>
> Please use an allocating xarray instead of an IDA plus list.
>
>> +	/*
>> +	 * queue must have been unregistered here, it is safe to iterate
>> +	 * the list w/o lock
>> +	 */
>
> Please capitalize multi-line comments.
>
>> + * After the pluggable blk-qos, rqos's life cycle become complicated,
>> + * as we may modify the rqos list there. Except for the places where
>> + * queue is not registered, there are following places may access rqos
>> + * list concurrently:
>
> Code comments are not the place to explain history.  Please explain the
> current situation.
>
>> +struct rq_qos *rq_qos_get(struct request_queue *q, int id)
>> +{
>> +	struct rq_qos *rqos;
>> +
>> +	spin_lock_irq(&q->queue_lock);
>
> Please don't use the grab-all queue_lock for new code.  It badly needs
> to be split and documented, and new code is the best place to start
> that.
>
> Also, with all the new code, please add a new config option that is
> selected by all rq-qos implementations so that blk-rq-qos.c only gets
> built when actually needed.
>
>> +static inline struct rq_qos *rq_qos_by_id(struct request_queue *q, int id)
>> +{
>> +	struct rq_qos *rqos;
>> +
>> +	WARN_ON(!mutex_is_locked(&q->sysfs_lock) && !spin_is_locked(&q->queue_lock));
>
> Another overly long line.  And if in doubt, split this into two helpers
> so that you can use lockdep_assert_held instead of doing the incorrect
> asserts.

Thanks so much for your kind comments. I'll change the code accordingly in
the next version. Rough sketches of what I have in mind are appended below.

Regards
Jianchao
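
For the dir_name assignment, I'd switch to a plain if/else as you suggest,
which also keeps the line short:

	const char *dir_name;

	if (rqos->ops->name)
		dir_name = rqos->ops->name;
	else
		dir_name = rq_qos_id_to_name(rqos->id);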
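
For the IDA plus list, an allocating xarray can cover both the id allocation
and the lookup. Something along these lines, untested, and the helper names,
the ops->id field and the XA_LIMIT bounds here are only placeholders:

#include <linux/xarray.h>

/* One allocating xarray instead of rq_qos_ida + rq_qos_list. */
static DEFINE_XARRAY_ALLOC(rq_qos_xa);
static DEFINE_MUTEX(rq_qos_mutex);

int rq_qos_register(struct rq_qos_ops *ops)
{
	u32 id;
	int ret;

	mutex_lock(&rq_qos_mutex);
	/* Let the xarray hand out the id for dynamically registered policies. */
	ret = xa_alloc(&rq_qos_xa, &id, ops,
		       XA_LIMIT(RQ_QOS_IOPRIO + 1, INT_MAX), GFP_KERNEL);
	if (!ret)
		ops->id = id;
	mutex_unlock(&rq_qos_mutex);
	return ret;
}

void rq_qos_unregister(struct rq_qos_ops *ops)
{
	mutex_lock(&rq_qos_mutex);
	xa_erase(&rq_qos_xa, ops->id);
	mutex_unlock(&rq_qos_mutex);
}

/* Lookup replaces walking rq_qos_list; iteration would use xa_for_each(). */
static struct rq_qos_ops *rq_qos_ops_find(int id)
{
	return xa_load(&rq_qos_xa, id);
}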
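
For rq_qos_by_id(), splitting it into two thin wrappers around a common walk
lets lockdep check the exact lock instead of the WARN_ON(). Again just a
sketch, the helper names are made up:

static inline struct rq_qos *__rq_qos_by_id(struct request_queue *q, int id)
{
	struct rq_qos *rqos;

	for (rqos = q->rq_qos; rqos; rqos = rqos->next)
		if (rqos->id == id)
			return rqos;
	return NULL;
}

/* For callers that hold q->queue_lock. */
static inline struct rq_qos *rq_qos_by_id_locked(struct request_queue *q, int id)
{
	lockdep_assert_held(&q->queue_lock);
	return __rq_qos_by_id(q, id);
}

/* For callers that hold q->sysfs_lock. */
static inline struct rq_qos *rq_qos_by_id_sysfs(struct request_queue *q, int id)
{
	lockdep_assert_held(&q->sysfs_lock);
	return __rq_qos_by_id(q, id);
}

lockdep_assert_held() works on both the spinlock and the mutex, so both
paths get a real assertion.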