On 12/21/2021 11:54 PM, Eli Cohen wrote:
On Tue, Dec 21, 2021 at 11:29:36PM -0800, Si-Wei Liu wrote:
On 12/21/2021 11:10 PM, Eli Cohen wrote:
On Wed, Dec 22, 2021 at 09:03:37AM +0200, Parav Pandit wrote:
From: Eli Cohen <elic@xxxxxxxxxx>
Sent: Wednesday, December 22, 2021 12:17 PM
--- a/drivers/vdpa/vdpa.c
+++ b/drivers/vdpa/vdpa.c
@@ -507,6 +507,9 @@ static int vdpa_mgmtdev_fill(const struct vdpa_mgmt_dev *mdev, struct sk_buff *m
err = -EMSGSIZE;
goto msg_err;
}
+ if (nla_put_u16(msg, VDPA_ATTR_DEV_MGMTDEV_MAX_VQS,
+ mdev->max_supported_vqs))
It still needs a default value when the field is not explicitly
filled in by the driver.
Unlikely. This can be an optional field to help the user decide the device's max limit.
When max_supported_vqs is set to zero, vdpa should omit exposing it to user
space.
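A minimal sketch of that suggestion, assuming the same vdpa_mgmtdev_fill()
context as the hunk above (the zero check is what is being proposed here, not
what the posted patch does):

	/* Sketch: treat a zero max_supported_vqs as "not reported" and
	 * skip the attribute entirely, so userspace only ever sees a
	 * real limit.
	 */
	if (mdev->max_supported_vqs &&
	    nla_put_u16(msg, VDPA_ATTR_DEV_MGMTDEV_MAX_VQS,
			mdev->max_supported_vqs)) {
		err = -EMSGSIZE;
		goto msg_err;
	}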
This is not about what you expose to userspace. It's about the number of VQs
you want to create for a specific instance of vdpa.
This value on the mgmtdev indicates that a given mgmt device supports creating a vdpa device that can have a maximum of N VQs.
The user will choose to create a device with VQs <= N depending on its vCPU count and other factors.
You're right.
So each vendor needs to put their value there.
If I understand Parav correctly, he was suggesting not to expose
VDPA_ATTR_DEV_MGMTDEV_MAX_VQS to userspace when the driver reports
(max_supported_vqs == 0).
I can see the reasoning, but maybe we should leave it as zero, which
means it was not reported. The user will then need to guess. I believe
other vendors will follow with an update setting this to a real value.
Unless you place a check in the vdpa core to enforce it on vdpa
creation, it's very likely to get ignored by other vendors.
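Something along these lines could work as a core-side guard; a hedged
sketch, assuming it sits in the netlink device-add path and that the
requested VQ count is at hand (requested_vqs below is illustrative, not a
field from the posted series):

	/* Sketch: have the vdpa core reject creation requests that
	 * exceed the parent's advertised limit; requested_vqs stands
	 * in for however the dev-add request carries the VQ count.
	 */
	if (mdev->max_supported_vqs &&
	    requested_vqs > mdev->max_supported_vqs) {
		NL_SET_ERR_MSG_MOD(info->extack,
				   "requested VQs exceed mgmtdev limit");
		return -EOPNOTSUPP;
	}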
But meanwhile, I do wonder how users would tell a multiqueue-supporting
parent apart from a single-queue mgmtdev without the aid of this field. I
hope the answer won't be to create a vdpa instance just to try.
Do you see a scenario where an admin decides not to instantiate a vdpa
device just because it does not support MQ?
Yes, there is. If the hardware doesn't support MQ, the provisioning tool
in the mgmt software will need to fall back to the software vhost backend
with mq=on. At the time the tool does this check, it doesn't run with
root privilege.
And if the management device reports that it does support MQ, there's still no
guarantee you'll end up with an MQ net device.
I'm not sure I follow. Do you mean it may be up to the guest feature
negotiation? But the device itself is still MQ capable, isn't it?
Thanks,
-Siwei
This is what is exposed to the user to decide the upper bound.
There has been some talk/patches about an rdma virtio device.
I anticipate such a device to support more than 64K queues by the nature of rdma.
It is better to keep max_supported_vqs as u32.
Why not add it when we have it?
Sure, but with that approach we will end up adding two fields (the current u16 and later another u32) due to the smaller bit width of the current one.
Either way is fine. Michael was suggesting a similarly higher bit width in other patches, so I'm bringing it up here to see how he views this field.
I can use u32 then.
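For reference, the u32 variant of the earlier hunk would presumably just
switch the put helper (a sketch, assuming max_supported_vqs is widened to
u32 in struct vdpa_mgmt_dev):

	/* Sketch: same attribute, emitted as a 32-bit value so an
	 * rdma-class device can report more than 64K queues; userspace
	 * would read it back with nla_get_u32().
	 */
	if (nla_put_u32(msg, VDPA_ATTR_DEV_MGMTDEV_MAX_VQS,
			mdev->max_supported_vqs)) {
		err = -EMSGSIZE;
		goto msg_err;
	}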