On 2/15/23 4:38 PM, Jens Axboe wrote:
> On 2/14/23 7:48 AM, Pankaj Raghav wrote:
>> Hi Ming,
>>
>> On 2023-02-13 13:40, Ming Lei wrote:
>>>>>
>>>>> Can you share perf data on other non-io_uring engines that are often
>>>>> used? The thing is that we still have lots of non-io_uring workloads,
>>>>> which can't be hurt now.
>>>>>
>>>> Sounds good. Do psync and libaio along with io_uring suffice?
>>>
>>> Yeah, that should be enough.
>>>
>>
>> A performance regression is noticed for libaio and psync. I ran the same
>> tests on null_blk with the bio and blk-mq backends and noticed a similar
>> pattern.
>>
>> Should we add a module parameter to switch between the bio and blk-mq
>> back-ends in brd, similar to null_blk? The default option would be bio,
>> to avoid regressions for existing workloads.
>>
>> There is a clear performance gain for some workloads with blk-mq support
>> in brd. Let me know your thoughts. See below for the performance results.
>>
>> Results for brd with --direct enabled:
>
> I think your numbers are skewed because brd isn't flagged nowait, can you
> try with this?
>
> I ran some quick testing here, using the current tree:
>
>                  without patch    with patch
>     io_uring     ~430K IOPS       ~3.4M IOPS
>     libaio       ~895K IOPS       ~895K IOPS
>
> which is a pretty substantial difference...

And here's the actual patch. FWIW, this doesn't make a difference for
libaio, because aio doesn't really care if it blocks or not.

diff --git a/drivers/block/brd.c b/drivers/block/brd.c
index 20acc4a1fd6d..82419e345777 100644
--- a/drivers/block/brd.c
+++ b/drivers/block/brd.c
@@ -412,6 +412,7 @@ static int brd_alloc(int i)
 	/* Tell the block layer that this is not a rotational device */
 	blk_queue_flag_set(QUEUE_FLAG_NONROT, disk->queue);
 	blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, disk->queue);
+	blk_queue_flag_set(QUEUE_FLAG_NOWAIT, disk->queue);
 	err = add_disk(disk);
 	if (err)
 		goto out_cleanup_disk;

-- 
Jens Axboe
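
[For context on the null_blk-style switch Pankaj suggests above, here is a
minimal sketch of what a load-time backend selector for brd could look like.
The parameter name and enum values (queue_mode, BRD_Q_BIO, BRD_Q_MQ) are
illustrative assumptions modeled on null_blk's queue_mode parameter; they are
not part of any posted patch.]

#include <linux/module.h>

/*
 * Hypothetical sketch, not from a posted patch: pick the submission path at
 * module load time, defaulting to the existing bio path so current workloads
 * do not regress.
 */
enum {
	BRD_Q_BIO = 0,	/* current submit_bio-based path */
	BRD_Q_MQ  = 1,	/* blk-mq path backed by a tag set */
};

static int brd_queue_mode = BRD_Q_BIO;
module_param_named(queue_mode, brd_queue_mode, int, 0444);
MODULE_PARM_DESC(queue_mode, "Block interface to use (0=bio, 1=blk-mq)");

brd_alloc() would then either keep the current blk_alloc_disk() setup or
allocate a blk_mq_tag_set and use blk_mq_alloc_disk(), depending on
brd_queue_mode, much like null_blk does today.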