On 8/30/2022 10:22 PM, Eli Cohen wrote:
From: Si-Wei Liu <si-wei.liu@xxxxxxxxxx>
Sent: Wednesday, August 31, 2022 2:23 AM
To: Michael S. Tsirkin <mst@xxxxxxxxxx>
Cc: Eli Cohen <elic@xxxxxxxxxx>; virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx;
Jason Wang <jasowang@xxxxxxxxxx>; eperezma@xxxxxxxxxx
Subject: Re: RFC: control virtqueue size by the vdpa tool
On 8/30/2022 3:01 PM, Michael S. Tsirkin wrote:
On Tue, Aug 30, 2022 at 02:04:55PM -0700, Si-Wei Liu wrote:
On 8/30/2022 12:58 PM, Michael S. Tsirkin wrote:
On Tue, Aug 30, 2022 at 06:22:31AM +0000, Eli Cohen wrote:
Hi,

I have been experimenting with different queue sizes with mlx5_vdpa and noticed that queue size can affect performance.

I would like to propose an extension to the vdpa tool that allows specifying the queue size. Valid values will conform to the maximum of 32768 specified by the spec.

“vdpa mgmtdev show” will gain another line specifying the valid range for a management device, which could be narrower than the spec allows. This range will be valid for data queues only (not for the control VQ). Another line will display the default queue size.

Example:
$ vdpa mgmtdev show
auxiliary/mlx5_core.sf.6:
supported_classes net
max_supported_vqs 65
dev_features CSUM GUEST_CSUM MTU HOST_TSO4 HOST_TSO6
STATUS CTRL_VQ CTRL_VLAN
MQ CTRL_MAC_ADDR VERSION_1 ACCESS_PLATFORM
data queue range 256-4096
default queue size 256
When you create the vdpa device you can specify the requested value:
$ vdpa dev add name vdpa-a mgmtdev auxiliary/mlx5_core.sf.6 max_vqp 1 mtu 9000 queue_size 1024
A follow-up question: isn't it enough to control the size from qemu? do we need the ability to control it at the kernel level?
Right, I think today we can optionally control the queue size from qemu via rx_queue_size or tx_queue_size, but it has a limit of 1024 (btw, why does it have such a limit? It seems relatively low to me). I think what was missing for QEMU is a way to query the max queue size that the hardware can support from the backend.
I agree that ethtool is the way to go.
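On the guest side, the existing ethtool ring-parameter interface already covers querying and resizing; a sketch of what that looks like (eth0 is a placeholder interface name, and the maximums reported depend on the device):

```shell
# Show the current ring sizes and the hardware-advertised maximums
# (virtio-net exposes these via the ring-parameter ethtool ops).
ethtool -g eth0

# Resize the RX/TX rings; values must stay within the advertised maximums.
ethtool -G eth0 rx 1024 tx 1024
```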
BTW, Si-Wei, can you point to the code that limits the configuration to 1024?
It's in QEMU's virtio_net_device_realize():
    virtio_net_set_config_size(n, n->host_features);
    virtio_init(vdev, "virtio-net", VIRTIO_ID_NET, n->config_size);

    /*
     * We set a lower limit on RX queue size to what it always was.
     * Guests that want a smaller ring can always resize it without
     * help from us (using virtio 1 and up).
     */
    if (n->net_conf.rx_queue_size < VIRTIO_NET_RX_QUEUE_MIN_SIZE ||
        n->net_conf.rx_queue_size > VIRTQUEUE_MAX_SIZE ||
        !is_power_of_2(n->net_conf.rx_queue_size)) {
        error_setg(errp, "Invalid rx_queue_size (= %" PRIu16 "), "
                   "must be a power of 2 between %d and %d.",
                   n->net_conf.rx_queue_size, VIRTIO_NET_RX_QUEUE_MIN_SIZE,
                   VIRTQUEUE_MAX_SIZE);
        virtio_cleanup(vdev);
        return;
    }
-Siwei
And if ethtool does not provide a way to show the max, we can add this support in the future.
-Siwei
okay sure. my question is how important is it to control it in the kernel?
I don't have a specific use case for that (in the kernel).
-Siwei
_______________________________________________
Virtualization mailing list
Virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linuxfoundation.org/mailman/listinfo/virtualization