Re: [PATCH v2 1/1] virtio-blk: add num_io_queues module parameter

On 9/12/2021 2:45 PM, Michael S. Tsirkin wrote:
On Sun, Sep 12, 2021 at 01:33:13PM +0300, Max Gurtovoy wrote:
On 9/12/2021 12:50 PM, Michael S. Tsirkin wrote:
On Sun, Sep 12, 2021 at 12:37:26PM +0300, Max Gurtovoy wrote:
On 9/12/2021 12:07 PM, Michael S. Tsirkin wrote:
On Sat, Sep 11, 2021 at 03:56:45PM +0300, Max Gurtovoy wrote:
On 9/10/2021 1:57 AM, Michael S. Tsirkin wrote:
On Thu, Sep 09, 2021 at 07:45:42PM +0300, Max Gurtovoy wrote:
On 9/9/2021 7:31 PM, Michael S. Tsirkin wrote:
On Thu, Sep 09, 2021 at 06:51:56PM +0300, Max Gurtovoy wrote:
On 9/9/2021 6:40 PM, Michael S. Tsirkin wrote:
On Thu, Sep 09, 2021 at 06:37:37PM +0300, Max Gurtovoy wrote:
On 9/9/2021 4:42 PM, Michael S. Tsirkin wrote:
On Mon, Sep 06, 2021 at 02:59:40PM +0300, Max Gurtovoy wrote:
On 9/6/2021 2:20 PM, Michael S. Tsirkin wrote:
On Mon, Sep 06, 2021 at 01:31:32AM +0300, Max Gurtovoy wrote:
On 9/5/2021 7:02 PM, Michael S. Tsirkin wrote:
On Thu, Sep 02, 2021 at 02:45:52PM +0100, Stefan Hajnoczi wrote:
On Tue, Aug 31, 2021 at 04:50:35PM +0300, Max Gurtovoy wrote:
Sometimes a user would like to control the amount of IO queues to be
created for a block device. For example, for limiting the memory
footprint of virtio-blk devices.

Signed-off-by: Max Gurtovoy <mgurtovoy@xxxxxxxxxx>
---

changes from v1:
          - use param_set_uint_minmax (from Christoph)
          - added "Should > 0" to module description

Note: This commit applies on top of Jens's branch for-5.15/drivers
---
          drivers/block/virtio_blk.c | 20 +++++++++++++++++++-
          1 file changed, 19 insertions(+), 1 deletion(-)

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 4b49df2dfd23..9332fc4e9b31 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -24,6 +24,22 @@
          /* The maximum number of sg elements that fit into a virtqueue */
          #define VIRTIO_BLK_MAX_SG_ELEMS 32768
+static int virtblk_queue_count_set(const char *val,
+		const struct kernel_param *kp)
+{
+	return param_set_uint_minmax(val, kp, 1, nr_cpu_ids);
+}
Hmm which tree is this for?
I've mentioned in the note that it applies on the for-5.15/drivers branch. But now
you can apply it on linus/master as well.


+
+static const struct kernel_param_ops queue_count_ops = {
+	.set = virtblk_queue_count_set,
+	.get = param_get_uint,
+};
+
+static unsigned int num_io_queues;
+module_param_cb(num_io_queues, &queue_count_ops, &num_io_queues, 0644);
+MODULE_PARM_DESC(num_io_queues,
+		 "Number of IO virt queues to use for blk device. Should > 0");
better:

+MODULE_PARM_DESC(num_io_request_queues,
+                "Limit number of IO request virt queues to use for each device. 0 for no limit");
You proposed it and I replied to it below.
Can't say I understand 100% what you are saying. Want to send
a description that does make sense to you but
also reflects reality? 0 is the default so
"should > 0" besides being ungrammatical does not seem t"
reflect what it does ...
if you "modprobe virtio_blk" the previous behavior will happen.

You can't "modprobe virtio_blk num_io_request_queues=0" since the minimal
value is 1.

So your description is not reflecting the code.

We can do:

MODULE_PARM_DESC(num_io_request_queues, "Number of request virt queues to use for blk device. Minimum value is 1 queue");
What's the default value? We should document that.
The default value for static global variables is 0.

MODULE_PARM_DESC(num_io_request_queues, "Number of request virt queues to
use for blk device. Minimum value is 1 queue. Default and Maximum value is
equal to the total number of CPUs");
So it says it's the # of CPUs but you inspect it with
sysfs and it's actually 0. Let's say that's confusing
at the least. Why not just let users set it to 0
and document that?

Setting it by the user to 0 makes no sense.

We can say "if not set, the value equals to the total number of CPUs".
The value is 0. It seems to mean "no limit". The actual # of queues is
then the smaller of the # of CPUs and the # of hardware queues.
I see no reason not to allow the user to set that, especially if
it was originally the value, then the user changed it
and is trying to change it back.
I'm fine with that.

MODULE_PARM_DESC(num_io_request_queues, "Number of request virt queues to use for blk device. 0 value means no limitation");

OK and the second distinction is that it's a limit on the
number, not the actual number. It's never more than what's provided
by the hypervisor.
MODULE_PARM_DESC(num_io_request_queues, "Maximal number of request virt queues to use for blk device. 0 value means no limitation");

Is that OK?
Looks ok. And then do we need to limit this to nr_cpu_ids?
Setting a value that's too high seems harmless ...
Why would you want that?
So one can write a script that keeps working even when hypervisor
changes the # of CPU IDs.

It's also consistent with other parameters, e.g.:

         clocksource.verify_n_cpus= [KNL]
                         Limit the number of CPUs checked for clocksources
                         marked with CLOCK_SOURCE_VERIFY_PERCPU that
                         are marked unstable due to excessive skew.
                         A negative value says to check all CPUs, while
                         zero says not to check any.  Values larger than
                         nr_cpu_ids are silently truncated to nr_cpu_ids.

			^^^^^^^^^^^^

                         The actual CPUs are chosen randomly, with
                         no replacement if the same CPU is chosen twice.

I don't understand how this example is relevant. It's not a blk-mq device that allocates queues for submitting IOs.

There is no sense in creating more queues than the number of CPUs in the blk-mq design.

And there's also no reason to set it to some high value that we'll silently decrease.

Why would anyone write a script with hard-coded CPU counts instead of checking it with some Linux tool and then setting the module param accordingly?



The actual value of the created queues can be seen in /sys/block/vda/mq/ as
done today.
+
          static int major;
          static DEFINE_IDA(vd_index_ida);
@@ -501,7 +517,9 @@ static int init_vq(struct virtio_blk *vblk)
          	if (err)
          		num_vqs = 1;
-	num_vqs = min_t(unsigned int, nr_cpu_ids, num_vqs);
+	num_vqs = min_t(unsigned int,
+			min_not_zero(num_io_queues, nr_cpu_ids),
+			num_vqs);
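[Editorial illustration, not part of the quoted patch: a minimal sketch of how the clamping in the hunk above resolves, assuming the usual kernel semantics of min_not_zero() (if one argument is 0 it returns the other) and min_t(); the helper name resolve_num_vqs is made up for this example and assumes <linux/minmax.h> and <linux/cpumask.h>.]

/* Sketch only: mirrors the clamping above. An unset num_io_queues (0)
 * falls back to nr_cpu_ids, and the result never exceeds the number of
 * virtqueues the device actually offers (num_vqs).
 */
static unsigned int resolve_num_vqs(unsigned int num_io_queues,
				    unsigned int num_vqs)
{
	return min_t(unsigned int,
		     min_not_zero(num_io_queues, nr_cpu_ids),
		     num_vqs);
}

/* Examples with nr_cpu_ids = 8 and a device offering num_vqs = 4:
 *   num_io_queues = 0 -> 4 (no limit set, capped by the device)
 *   num_io_queues = 2 -> 2 (the module parameter limits the count)
 *   num_io_queues = 6 -> 4 (never more than the device provides)
 */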
If you respin, please consider calling them request queues. That's the
terminology from the VIRTIO spec and it's nice to keep it consistent.
But the purpose of num_io_queues is clear, so:

Reviewed-by: Stefan Hajnoczi <stefanha@xxxxxxxxxx>
I did this:
+static unsigned int num_io_request_queues;
+module_param_cb(num_io_request_queues, &queue_count_ops, &num_io_request_queues, 0644);
+MODULE_PARM_DESC(num_io_request_queues,
+                "Limit number of IO request virt queues to use for each device. 0 for no limit");
The parameter is writable and can be changed, and then new devices might be
probed with the new value.

It can't be zero in the code. We can change the param_set_uint_minmax args and
say that 0 means nr_cpus.

I'm OK with the renaming, but I prefer to stick to the description we gave in
v3 of this patch (and maybe enable a value of 0, as mentioned above).
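[Editorial illustration, not from the thread: a minimal sketch of what accepting 0 as "no limit" in the setter could look like, assuming param_set_uint_minmax() is simply passed a minimum of 0; the min_not_zero() clamp in init_vq() above would then make 0 fall back to nr_cpu_ids.]

static int virtblk_queue_count_set(const char *val,
		const struct kernel_param *kp)
{
	/* Accept 0 ("no limit") up to nr_cpu_ids. */
	return param_set_uint_minmax(val, kp, 0, nr_cpu_ids);
}

With that change, "modprobe virtio_blk num_io_request_queues=0" would be accepted and behave the same as leaving the parameter unset.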


