On Thu, May 23, 2024 at 2:12 AM Bart Van Assche <bvanassche@xxxxxxx> wrote:
>
> On 5/22/24 10:57, Jens Axboe wrote:
> > On 5/22/24 11:38 AM, Bart Van Assche wrote:
> >> On 5/22/24 00:48, Yunlong Xing wrote:
> >>> @@ -1913,6 +1921,10 @@ static void loop_handle_cmd(struct loop_cmd *cmd)
> >>>                 set_active_memcg(old_memcg);
> >>>                 css_put(cmd_memcg_css);
> >>>         }
> >>> +
> >>> +       if (ori_ioprio != cmd_ioprio)
> >>> +               set_task_ioprio(current, ori_ioprio);
> >>> +
> >>>  failed:
> >>>         /* complete non-aio request */
> >>>         if (!use_aio || ret) {
> >>
> >> Does adding this call in the hot path have a measurable performance impact?
> >
> > It's loop, I would not be concerned with overhead. But it does look pretty
> > bogus to modify the task ioprio from here.
>
> Hi Jens,
>
> Maybe Yunlong uses that call to pass the I/O priority to the I/O submitter?
>
> I think that it is easy to pass the I/O priority to the kiocb submitted by
> lo_rw_aio() without calling set_task_ioprio().
>
> lo_read_simple() and lo_write_simple() however call vfs_iter_read() /
> vfs_iter_write(). This results in a call of do_iter_readv_writev() and
> init_sync_kiocb(). The latter function calls get_current_ioprio(). This is
> probably why the set_task_ioprio() call has been added?

Yeah, that's why I call set_task_ioprio(): I want the loop kworker task,
when it submits I/O to the real disk device, to carry the I/O priority of
the loop device request, for both lo_rw_aio() and
lo_read_simple()/lo_write_simple().

Thanks,
Yunlong.

>
> Thanks,
>
> Bart.
>
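
P.S. For the aio path, the change Bart suggests could look roughly like the
sketch below. This is only a minimal sketch against drivers/block/loop.c,
assuming the usual lo_rw_aio() layout where the request has already been
recovered with blk_mq_rq_from_pdu(cmd) into a local variable rq, and where
ki_ioprio is currently initialized to IOPRIO_CLASS_NONE:

--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ ... @@ static int lo_rw_aio(struct loop_device *lo, struct loop_cmd *cmd,
        cmd->iocb.ki_pos = pos;
        cmd->iocb.ki_filp = file;
        cmd->iocb.ki_complete = lo_rw_aio_complete;
        cmd->iocb.ki_flags = IOCB_DIRECT;
-       cmd->iocb.ki_ioprio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_NONE, 0);
+       /* carry the loop request's priority into the backing-file kiocb */
+       cmd->iocb.ki_ioprio = req_get_ioprio(rq);

That would only cover lo_rw_aio(), though. As Bart points out, the
synchronous lo_read_simple()/lo_write_simple() path goes through
init_sync_kiocb(), which takes the priority from get_current_ioprio(), so
that path cannot be handled the same way without changing how its kiocb is
built.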