On 2020-08-28 00:19, Daejun Park wrote:
> +static unsigned int ufshpb_host_map_kbytes = 1024;

A comment that explains where this value comes from would be welcome.

> +static struct ufshpb_req *ufshpb_get_map_req(struct ufshpb_lu *hpb,
> +					     struct ufshpb_subregion *srgn)
> +{
> +	struct ufshpb_req *map_req;
> +	struct request *req;
> +	struct bio *bio;
> +
> +	map_req = kmem_cache_alloc(hpb->map_req_cache, GFP_KERNEL);
> +	if (!map_req)
> +		return NULL;
> +
> +	req = blk_get_request(hpb->sdev_ufs_lu->request_queue,
> +			      REQ_OP_SCSI_IN, BLK_MQ_REQ_PREEMPT);

Why BLK_MQ_REQ_PREEMPT? Since this code is only executed while medium access commands are being processed, and since none of those commands have the PREEMPT flag set, I think the PREEMPT flag should be left out. Otherwise there will probably be weird interactions with runtime-suspended devices.

Is it acceptable for the above blk_get_request() call to block if a UFS device has been runtime suspended? If not, consider using the BLK_MQ_REQ_NOWAIT flag instead.

> +	bio = bio_alloc(GFP_KERNEL, hpb->pages_per_srgn);
> +	if (!bio) {
> +		blk_put_request(req);
> +		goto free_map_req;
> +	}

If the blk_get_request() call is modified such that it does not wait, this call may have to be modified too (GFP_NOWAIT?).

> +	if (rgn->rgn_state == HPB_RGN_INACTIVE) {
> +		if (atomic_read(&lru_info->active_cnt)
> +		    == lru_info->max_lru_active_cnt) {

When splitting a line, please put comparison operators at the end of the line instead of at the start, e.g. as follows:

		if (atomic_read(&lru_info->active_cnt) ==
		    lru_info->max_lru_active_cnt) {

> +	pool_size = DIV_ROUND_UP(ufshpb_host_map_kbytes * 1024, PAGE_SIZE);

Please use PAGE_ALIGN() to align to the next page boundary.

Thanks,

Bart.