On 2/5/21 10:17 AM, Michael S. Tsirkin wrote:
> On Thu, Feb 04, 2021 at 05:35:07AM -0600, Mike Christie wrote:
>> @@ -1132,14 +1127,8 @@ vhost_scsi_handle_vq(struct vhost_scsi *vs, struct vhost_virtqueue *vq)
>>  		 * vhost_scsi_queue_data_in() and vhost_scsi_queue_status()
>>  		 */
>>  		cmd->tvc_vq_desc = vc.head;
>> -		/*
>> -		 * Dispatch cmd descriptor for cmwq execution in process
>> -		 * context provided by vhost_scsi_workqueue. This also ensures
>> -		 * cmd is executed on the same kworker CPU as this vhost
>> -		 * thread to gain positive L2 cache locality effects.
>> -		 */
>> -		INIT_WORK(&cmd->work, vhost_scsi_submission_work);
>> -		queue_work(vhost_scsi_workqueue, &cmd->work);
>> +		target_queue_cmd_submit(tpg->tpg_nexus->tvn_se_sess,
>> +					&cmd->tvc_se_cmd);
>>  		ret = 0;
>>  err:
>>  		/*
>
> What about this aspect? Will things still stay on the same CPU

Yes, if that is what it's configured to do.

On the submission path there is no change in behavior:
target_queue_cmd_submit() does a queue_work_on(), so LIO executes the
cmd on the same CPU it was submitted from. Once LIO passes the cmd to
the block layer, that layer does whatever it is set up to do.

On the completion path the lower levels work the same as before. The
low level driver goes by its ISR/softirq/completion-thread settings,
and the block layer then goes by its queue settings like rq_affinity.
The change in behavior is that in LIO we will now do whatever the
layer below us is configured to do, instead of always trying to
complete the cmd on the same CPU it was submitted on.
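
For reference, here is a minimal sketch of the queue_work_on() pattern
described above. This is not the actual LIO or vhost-scsi code; the
example_* names are made up for illustration:

#include <linux/workqueue.h>
#include <linux/smp.h>

struct example_cmd {
	struct work_struct work;
	/* command state would live here */
};

/* assume this was set up at init with alloc_workqueue() (bound wq) */
static struct workqueue_struct *example_wq;

static void example_submission_work(struct work_struct *work)
{
	struct example_cmd *cmd = container_of(work, struct example_cmd,
					       work);

	/*
	 * Submit the cmd to the layer below here. This runs in process
	 * context on the same CPU the cmd was queued from.
	 */
	(void)cmd;
}

static void example_queue_cmd(struct example_cmd *cmd)
{
	INIT_WORK(&cmd->work, example_submission_work);
	/*
	 * queue_work_on() explicitly pins the work item to the given
	 * CPU, which is how the submission stays local to the
	 * submitting CPU instead of letting the workqueue pick one.
	 */
	queue_work_on(raw_smp_processor_id(), example_wq, &cmd->work);
}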