On 12/14/2021 12:22 AM, Eli Cohen wrote:
On Mon, Dec 13, 2021 at 05:00:18PM -0800, Si-Wei Liu wrote:
On 12/13/2021 6:42 AM, Eli Cohen wrote:
Add a max_supported_vqs field to struct vdpa_mgmt_dev. Upstream drivers
need to fill this value according to the device's capabilities.
This value is reported back in a netlink message when showing a device.
Example:
$ vdpa dev show
vdpa-a: type network mgmtdev auxiliary/mlx5_core.sf.1 vendor_id 5555 \
max_vqp 3 max_vq_size 256 max_supported_vqs 256
Signed-off-by: Eli Cohen <elic@xxxxxxxxxx>
---
drivers/vdpa/vdpa.c | 2 ++
include/linux/vdpa.h | 1 +
include/uapi/linux/vdpa.h | 1 +
3 files changed, 4 insertions(+)
diff --git a/drivers/vdpa/vdpa.c b/drivers/vdpa/vdpa.c
index 130a8d4aeaed..b9dd693146be 100644
--- a/drivers/vdpa/vdpa.c
+++ b/drivers/vdpa/vdpa.c
@@ -695,6 +695,8 @@ vdpa_dev_fill(struct vdpa_device *vdev, struct sk_buff *msg, u32 portid, u32 seq
goto msg_err;
if (nla_put_u16(msg, VDPA_ATTR_DEV_MIN_VQ_SIZE, min_vq_size))
goto msg_err;
+ if (nla_put_u16(msg, VDPA_ATTR_DEV_MAX_DEV_VQS, vdev->mdev->max_supported_vqs))
+ goto msg_err;
What is the default value if vendor driver doesn't expose this value?
I think each vendor should put a value here. If you don't, you can still
add a vdpa device but you'll have to guess.
Alternatively, I can provide a patch that sets this value to 2 for all
vendors.
And it
doesn't seem this is something to stash in struct vdpa_mgmt_dev; only some
vdpa vendors for network drivers may need to expose it. If it is not
exposed, we can assume it's 32767 by the spec.
I am not sure there's any device capable of so many VQs. In any case, I
think using 2 as the default is better since any device can support that.
OK, default to 2 is fine (for net).
A generic vdpa op
(get_device_capability?)
The point is that you need this info to create a vdpa device so I don't
see how it can be a vdpa device operation.
I wonder if this info should belong to mgmtdev rather than vdpa dev? It
should be visible before user ever attempts to create a vdpa device.
to query device capability might be better I guess
(define a VDPA_CAPAB_NET_MAX_VQS subtype to get it).
Why limit this capability to net devices only? Any kind of vdpa device
may want to report this capability.
I thought what you report here is the max number of data queues the
device can support, no? The control or event queue that is emulated in
userspace isn't much interesting to users IMHO.
The user needs to take the hint from this value to create a vdpa net
device and specify a proper max_vqp value. It's rather counter-intuitive
if what they see is not what they would use.
Thanks,
-Siwei
genlmsg_end(msg, hdr);
return 0;
diff --git a/include/linux/vdpa.h b/include/linux/vdpa.h
index 9245dfbf1b08..72ea8ad7ba21 100644
--- a/include/linux/vdpa.h
+++ b/include/linux/vdpa.h
@@ -444,6 +444,7 @@ struct vdpa_mgmt_dev {
const struct vdpa_mgmtdev_ops *ops;
const struct virtio_device_id *id_table;
u64 config_attr_mask;
+ u16 max_supported_vqs;
struct list_head list;
};
diff --git a/include/uapi/linux/vdpa.h b/include/uapi/linux/vdpa.h
index 23b854e3e5e2..1b758d77e228 100644
--- a/include/uapi/linux/vdpa.h
+++ b/include/uapi/linux/vdpa.h
@@ -42,6 +42,7 @@ enum vdpa_attr {
VDPA_ATTR_DEV_MAX_VQ_SIZE, /* u16 */
VDPA_ATTR_DEV_MIN_VQ_SIZE, /* u16 */
VDPA_ATTR_DEV_FLAGS, /* u64 */
+ VDPA_ATTR_DEV_MAX_DEV_VQS, /* u16 */
VDPA_ATTR_DEV_NET_CFG_MACADDR, /* binary */
VDPA_ATTR_DEV_NET_STATUS, /* u8 */
_______________________________________________
Virtualization mailing list
Virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx
https://lists.linuxfoundation.org/mailman/listinfo/virtualization