On 2021/1/20 9:51 AM, JeffleXu wrote:
On 1/19/21 12:06 PM, Jason Wang wrote:
On 2021/1/19 9:33 AM, JeffleXu wrote:
On 1/18/21 1:25 PM, Jason Wang wrote:
On 2021/1/18 11:58 AM, Joseph Qi wrote:
The module parameter 'virtblk_queue_depth' was first introduced for
testing/benchmarking purposes, as described in commit fc4324b4597c
("virtio-blk: base queue-depth on virtqueue ringsize or module param").
Since different virtio-blk devices may have different capabilities, we
should support a per-device queue depth instead of a per-module one.
So by default, i.e. when the module parameter 'virtblk_queue_depth' is
not set, size the queue from the number of free elements in the
device's own virtqueue.
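The default computation boils down to a small helper; the following is
only a condensed sketch of the patch below, with an invented function
name:

#include <linux/virtio.h>
#include <linux/virtio_ring.h>

/*
 * Condensed sketch of the per-device default: fill the ring, but halve
 * the depth when indirect descriptors are absent, since each request
 * then consumes two descriptors.
 */
static unsigned int virtblk_default_queue_depth(struct virtio_device *vdev,
						struct virtqueue *vq)
{
	unsigned int depth = vq->num_free;

	if (!virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC))
		depth /= 2;

	return depth;
}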
I wonder if it's better to use sysfs instead (or whether the block
layer already has something like this).
"/sys/block/<dev>/queue/nr_requests" indeed works, but isn't better to
set queue_depth according to the hardware capability at the very first?
AFAIK, nvme just set per-device queue_depth at initializing phase.
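For example, the effective per-device value can already be read back
from sysfs; a minimal user-space check might look like this ('vda' is
just an example device name):

#include <stdio.h>

int main(void)
{
	/* 'vda' is an example; substitute the probed virtio-blk disk. */
	FILE *f = fopen("/sys/block/vda/queue/nr_requests", "r");
	unsigned int depth;

	if (!f || fscanf(f, "%u", &depth) != 1) {
		perror("nr_requests");
		return 1;
	}
	printf("per-device queue depth: %u\n", depth);
	fclose(f);
	return 0;
}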
I agree. The problem is that the current code may modify the module
parameter.
With this patch, the module parameter 'virtblk_queue_depth' actually
remains untouched. It is the original code, before this patch, that
changes the module parameter.
Yes, that's what I meant.
When the module parameter is not set on the boot cmdline (i.e., it
defaults to 0), it will be initialized to the queue depth of the vring
of the first probed virtio-blk device, and will be revealed to user
space through '/sys/module/virtio_blk/parameters/queue_depth'. I'm not
sure whether this behavior is reasonable or not.
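For reference, the parameter is declared in drivers/block/virtio_blk.c
roughly as follows (quoted from memory, so the exact permission mask
may differ):

#include <linux/module.h>

/* Backs /sys/module/virtio_blk/parameters/queue_depth; 0 means
 * "derive the depth from the device's own virtqueue at probe time".
 */
static unsigned int virtblk_queue_depth;
module_param_named(queue_depth, virtblk_queue_depth, uint, 0444);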
Right. It means the virtio-blk devices probed after the first one can
only use the queue depth that was set according to the capability of
the first virtio-blk device.
The only side effect of this patch is that
'/sys/module/virtio_blk/parameters/queue_depth' will now stay at '0'
when the module parameter is not set manually.
I think that's not an issue; nr_requests is the correct way to get
(and tune) the per-device queue depth.
Thanks
Thanks,
Jeffle
Signed-off-by: Joseph Qi <joseph.qi@xxxxxxxxxxxxxxxxx>
---
drivers/block/virtio_blk.c | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 145606d..f83a417 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -705,6 +705,7 @@ static int virtblk_probe(struct virtio_device *vdev)
 	u32 v, blk_size, max_size, sg_elems, opt_io_size;
 	u16 min_io_size;
 	u8 physical_block_exp, alignment_offset;
+	unsigned int queue_depth;
 
 	if (!vdev->config->get) {
 		dev_err(&vdev->dev, "%s failure: config access disabled\n",
@@ -755,17 +756,18 @@ static int virtblk_probe(struct virtio_device *vdev)
 		goto out_free_vq;
 	}
 
-	/* Default queue sizing is to fill the ring. */
-	if (!virtblk_queue_depth) {
-		virtblk_queue_depth = vblk->vqs[0].vq->num_free;
+	if (likely(!virtblk_queue_depth)) {
+		queue_depth = vblk->vqs[0].vq->num_free;
 		/* ... but without indirect descs, we use 2 descs per req */
 		if (!virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC))
-			virtblk_queue_depth /= 2;
+			queue_depth /= 2;
+	} else {
+		queue_depth = virtblk_queue_depth;
 	}
 
 	memset(&vblk->tag_set, 0, sizeof(vblk->tag_set));
 	vblk->tag_set.ops = &virtio_mq_ops;
-	vblk->tag_set.queue_depth = virtblk_queue_depth;
+	vblk->tag_set.queue_depth = queue_depth;
 	vblk->tag_set.numa_node = NUMA_NO_NODE;
 	vblk->tag_set.flags = BLK_MQ_F_SHOULD_MERGE;
 	vblk->tag_set.cmd_size =