Re: [RFC 01/29] nvkm/vgpu: introduce NVIDIA vGPU support prelude

On Sun, Sep 22, 2024 at 05:49:23AM -0700, Zhi Wang wrote:
> NVIDIA GPU virtualization is a technology that allows multiple virtual
> machines (VMs) to share the power of a single GPU, enabling greater
> flexibility, efficiency, and cost-effectiveness in data centers and cloud
> environments.
> 
> The first step of supporting NVIDIA vGPU in nvkm is to introduce the
> necessary vGPU data structures and functions to hook into the
> (de)initialization path of nvkm.
> 
> Introduce NVIDIA vGPU data structures and functions hooking into
> the (de)initialization path of nvkm to support the following patches.
> 
> Cc: Neo Jia <cjia@xxxxxxxxxx>
> Cc: Surath Mitra <smitra@xxxxxxxxxx>
> Signed-off-by: Zhi Wang <zhiw@xxxxxxxxxx>

Some minor comments that are a hint you all aren't running checkpatch on
your code...

> --- /dev/null
> +++ b/drivers/gpu/drm/nouveau/include/nvkm/vgpu_mgr/vgpu_mgr.h
> @@ -0,0 +1,17 @@
> +/* SPDX-License-Identifier: MIT */

Wait, what?  Why?  Ick.  You all also forgot the copyright line :(

> --- /dev/null
> +++ b/drivers/gpu/drm/nouveau/nvkm/vgpu_mgr/vgpu_mgr.c
> @@ -0,0 +1,76 @@
> +/* SPDX-License-Identifier: MIT */
> +#include <core/device.h>
> +#include <core/pci.h>
> +#include <vgpu_mgr/vgpu_mgr.h>
> +
> +static bool support_vgpu_mgr = false;

A global variable for the whole system?  Are you sure that will work
well over time?  Why isn't this a per-device thing?

> +module_param_named(support_vgpu_mgr, support_vgpu_mgr, bool, 0400);

This is not the 1990's, please never add new module parameters, use
per-device variables.  And no documentation?  That's not ok either even
if you did want to have this.

> +static inline struct pci_dev *nvkm_to_pdev(struct nvkm_device *device)
> +{
> +	struct nvkm_device_pci *pci = container_of(device, typeof(*pci),
> +						   device);
> +
> +	return pci->pdev;
> +}
> +
> +/**
> + * nvkm_vgpu_mgr_is_supported - check if a platform supports vGPU
> + * @device: the nvkm_device pointer
> + *
> + * Returns: true on supported platform which is newer than ADA Lovelace
> + * with SRIOV support.
> + */
> +bool nvkm_vgpu_mgr_is_supported(struct nvkm_device *device)
> +{
> +	struct pci_dev *pdev = nvkm_to_pdev(device);
> +
> +	if (!support_vgpu_mgr)
> +		return false;
> +
> +	return device->card_type == AD100 &&  pci_sriov_get_totalvfs(pdev);

checkpatch please.

And "AD100" is an odd #define, as you know.

> +}
> +
> +/**
> + * nvkm_vgpu_mgr_is_enabled - check if vGPU support is enabled on a PF
> + * @device: the nvkm_device pointer
> + *
> + * Returns: true if vGPU enabled.
> + */
> +bool nvkm_vgpu_mgr_is_enabled(struct nvkm_device *device)
> +{
> +	return device->vgpu_mgr.enabled;

What happens if this changes right after you look at it?


> +}
> +
> +/**
> + * nvkm_vgpu_mgr_init - Initialize the vGPU manager support
> + * @device: the nvkm_device pointer
> + *
> + * Returns: 0 on success, -ENODEV on platforms that are not supported.
> + */
> +int nvkm_vgpu_mgr_init(struct nvkm_device *device)
> +{
> +	struct nvkm_vgpu_mgr *vgpu_mgr = &device->vgpu_mgr;
> +
> +	if (!nvkm_vgpu_mgr_is_supported(device))
> +		return -ENODEV;
> +
> +	vgpu_mgr->nvkm_dev = device;
> +	vgpu_mgr->enabled = true;
> +
> +	pci_info(nvkm_to_pdev(device),
> +		 "NVIDIA vGPU mananger support is enabled.\n");

When drivers work properly, they are quiet.

Why can't you see all of this in the sysfs tree instead, to know if
support is there or not?  You all are properly tying your "sub driver"
logic into the driver model, right?  (hint, I don't think so as it looks
like that isn't happening, but I could be missing it...)
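i.e. something roughly like a per-device attribute instead of a log
message.  Kernel-side sketch only, not buildable as-is; the attribute
name and the drvdata plumbing are invented here:

```c
/*
 * Sketch: expose the state as a per-device sysfs attribute.
 * "vgpu_enabled" and the struct member layout are hypothetical.
 */
static ssize_t vgpu_enabled_show(struct device *dev,
				 struct device_attribute *attr, char *buf)
{
	struct nvkm_device *ndev = dev_get_drvdata(dev);

	return sysfs_emit(buf, "%d\n", ndev->vgpu_mgr.enabled);
}
static DEVICE_ATTR_RO(vgpu_enabled);
```

Then userspace reads it per device, and the driver stays quiet.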

thanks,

greg k-h



