Re: [PATCH 2/6] vDPA/ifcvf: support userspace to query features and MQ of a management device

On Thu, Jun 2, 2022 at 10:48 AM Zhu Lingshan <lingshan.zhu@xxxxxxxxx> wrote:
>
> Adapting to current netlink interfaces, this commit allows userspace
> to query feature bits and MQ capability of a management device.
>
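Not strictly a review comment, but to make the userspace side
concrete: once the parent reports these, they should become visible
through the iproute2 vdpa tool, e.g. (illustrative only; the exact
field names and formatting depend on the iproute2 version, and the
device name below is made up):

$ vdpa mgmtdev show
pci/0000:01:00.5:
  supported_classes net
  max_supported_vqs 9
  dev_features MTU MAC CTRL_VQ MQ VERSION_1
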
> Signed-off-by: Zhu Lingshan <lingshan.zhu@xxxxxxxxx>
> ---
>  drivers/vdpa/ifcvf/ifcvf_base.c | 12 ++++++++++++
>  drivers/vdpa/ifcvf/ifcvf_base.h |  1 +
>  drivers/vdpa/ifcvf/ifcvf_main.c |  3 +++
>  3 files changed, 16 insertions(+)
>
> diff --git a/drivers/vdpa/ifcvf/ifcvf_base.c b/drivers/vdpa/ifcvf/ifcvf_base.c
> index 6bccc8291c26..7be703b5d1f4 100644
> --- a/drivers/vdpa/ifcvf/ifcvf_base.c
> +++ b/drivers/vdpa/ifcvf/ifcvf_base.c
> @@ -341,6 +341,18 @@ int ifcvf_set_vq_state(struct ifcvf_hw *hw, u16 qid, u16 num)
>         return 0;
>  }
>
> +u16 ifcvf_get_max_vq_pairs(struct ifcvf_hw *hw)
> +{
> +       struct virtio_net_config __iomem *config;
> +       u16 val, mq;
> +
> +       config  = (struct virtio_net_config __iomem *)hw->dev_cfg;

Any reason we need the cast here? (A cast from void * seems unnecessary.)

> +       val = vp_ioread16((__le16 __iomem *)&config->max_virtqueue_pairs);

I don't see a __le16 cast in the other callers of vp_ioread16(); is
there anything that makes max_virtqueue_pairs different here?

Thanks
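
For illustration, if both casts can go away, the helper could shrink
to something like the below (untested sketch; it assumes
vp_ioread16() already returns the value in CPU byte order, since its
ioread16() backend does the little-endian conversion, in which case
the extra le16_to_cpu() would be dropped as well):

u16 ifcvf_get_max_vq_pairs(struct ifcvf_hw *hw)
{
	struct virtio_net_config __iomem *config = hw->dev_cfg;

	/* ioread16() returns the LE register in CPU byte order already */
	return vp_ioread16(&config->max_virtqueue_pairs);
}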

> +       mq = le16_to_cpu((__force __le16)val);
> +
> +       return mq;
> +}
> +
>  static int ifcvf_hw_enable(struct ifcvf_hw *hw)
>  {
>         struct virtio_pci_common_cfg __iomem *cfg;
> diff --git a/drivers/vdpa/ifcvf/ifcvf_base.h b/drivers/vdpa/ifcvf/ifcvf_base.h
> index f5563f665cc6..d54a1bed212e 100644
> --- a/drivers/vdpa/ifcvf/ifcvf_base.h
> +++ b/drivers/vdpa/ifcvf/ifcvf_base.h
> @@ -130,6 +130,7 @@ u64 ifcvf_get_hw_features(struct ifcvf_hw *hw);
>  int ifcvf_verify_min_features(struct ifcvf_hw *hw, u64 features);
>  u16 ifcvf_get_vq_state(struct ifcvf_hw *hw, u16 qid);
>  int ifcvf_set_vq_state(struct ifcvf_hw *hw, u16 qid, u16 num);
> +u16 ifcvf_get_max_vq_pairs(struct ifcvf_hw *hw);
>  struct ifcvf_adapter *vf_to_adapter(struct ifcvf_hw *hw);
>  int ifcvf_probed_virtio_net(struct ifcvf_hw *hw);
>  u32 ifcvf_get_config_size(struct ifcvf_hw *hw);
> diff --git a/drivers/vdpa/ifcvf/ifcvf_main.c b/drivers/vdpa/ifcvf/ifcvf_main.c
> index 4366320fb68d..0c3af30b297e 100644
> --- a/drivers/vdpa/ifcvf/ifcvf_main.c
> +++ b/drivers/vdpa/ifcvf/ifcvf_main.c
> @@ -786,6 +786,9 @@ static int ifcvf_vdpa_dev_add(struct vdpa_mgmt_dev *mdev, const char *name,
>         vf->hw_features = ifcvf_get_hw_features(vf);
>         vf->config_size = ifcvf_get_config_size(vf);
>
> +       ifcvf_mgmt_dev->mdev.max_supported_vqs = ifcvf_get_max_vq_pairs(vf);

Btw, I think the current IFCVF doesn't support provisioning a
$max_qps that is smaller than what the hardware supports.

Then I wonder if we need a min_supported_vqs attribute, or whether we
should do the mediation in the ifcvf parent.

Thanks
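
Purely illustrative, untested: if mediation in the ifcvf parent is the
way to go, ifcvf_vdpa_dev_add() could at least reject a provisioned
value the hardware cannot honor until real mediation is in place
(config->net.max_vq_pairs and the VDPA_ATTR_DEV_NET_CFG_MAX_VQP mask
bit are what the vdpa core already passes down):

	/* reject a max_vq_pairs the hardware cannot provide as-is */
	if (config->mask & BIT_ULL(VDPA_ATTR_DEV_NET_CFG_MAX_VQP) &&
	    config->net.max_vq_pairs != ifcvf_get_max_vq_pairs(vf))
		return -EOPNOTSUPP;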

> +       ifcvf_mgmt_dev->mdev.supported_features = vf->hw_features;
> +
>         adapter->vdpa.mdev = &ifcvf_mgmt_dev->mdev;
>         ret = _vdpa_register_device(&adapter->vdpa, vf->nr_vring);
>         if (ret) {
> --
> 2.31.1
>
