Re: [Qemu-devel] [RFC PATCH v3 2/3] VFIO driver for vGPU device

On 5/4/2016 4:13 AM, Alex Williamson wrote:
> On Tue, 3 May 2016 00:10:40 +0530

>>  obj-$(CONFIG_VGPU)			+= vgpu.o
>> +obj-$(CONFIG_VGPU_VFIO)                 += vgpu_vfio.o
>
> This is where we should add a new Kconfig entry for VGPU_VFIO, nothing
> in patch 1 has any vfio dependency.  Perhaps it should also depend on
> VFIO_PCI rather than VFIO since you are getting very PCI specific below.

VGPU_VFIO depends on VFIO but is independent of VFIO_PCI. VGPU_VFIO uses the VFIO APIs defined for PCI devices and shares some common #defines, but that doesn't mean it depends on VFIO_PCI.
I'll move the Kconfig entry for VGPU_VFIO here in the next version of the patch.
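Something like the rough sketch below is what I have in mind (the entry text is illustrative, not the final wording):

config VGPU_VFIO
	tristate "VFIO driver for vGPU devices"
	depends on VGPU && VFIO
	default n
	help
	  VFIO based driver for vGPU devices.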

>> +#define VFIO_PCI_OFFSET_SHIFT   40
>> +
>> +#define VFIO_PCI_OFFSET_TO_INDEX(off)	(off >> VFIO_PCI_OFFSET_SHIFT)
>> +#define VFIO_PCI_INDEX_TO_OFFSET(index) ((u64)(index) << VFIO_PCI_OFFSET_SHIFT)
>> +#define VFIO_PCI_OFFSET_MASK	(((u64)(1) << VFIO_PCI_OFFSET_SHIFT) - 1)
>
> Change the name of these from vfio-pci please or shift code around to
> use them directly.  You're certainly free to redefine these, but using
> the same name is confusing.
>

I'll move these defines to a common location.
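For reference, once moved to a common header the macros would be used like the sketch below (helper name is illustrative):

static int vgpu_decode_offset(loff_t ppos, int *index, u64 *offset)
{
	/* Bits 63..40 select the region index, the low 40 bits are the
	 * byte offset within that region; the same scheme vfio-pci
	 * uses internally.
	 */
	*index = VFIO_PCI_OFFSET_TO_INDEX(ppos);
	*offset = ppos & VFIO_PCI_OFFSET_MASK;

	if (*index >= VFIO_PCI_NUM_REGIONS)
		return -EINVAL;

	return 0;
}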


>> +	if (gpu_dev->ops->vgpu_bar_info)
>> +		ret = gpu_dev->ops->vgpu_bar_info(vgpu_dev, index, bar_info);
>
> vgpu_bar_info is already optional, further validating that the vgpu
> core is not PCI specific.

It is not optional if the vgpu_vfio module is to work on the device. If vgpu_bar_info is not provided by the vendor driver, open() would fail. vgpu_vfio expects a PCI device, so PCI device validation is also needed.
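To be explicit, the open() path does roughly this (sketch; structure and field names are illustrative):

static int vgpu_dev_open(void *device_data)
{
	struct vfio_vgpu_device *vdev = device_data;
	struct gpu_device *gpu_dev = vdev->vgpu_dev->gpu_dev;

	/* vgpu_bar_info is mandatory for vgpu_vfio: without it the PCI
	 * BAR layout of the virtual device cannot be built, so fail
	 * open() early.
	 */
	if (!gpu_dev->ops->vgpu_bar_info)
		return -EINVAL;

	vdev->refcnt++;
	return 0;
}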


>
> Let's not neglect ioport BARs here, IO_MASK is different.
>

vgpu_device is a virtual device; it is not going to drive VGA signals. Nvidia vGPU would not support an IO BAR.


>> +	vdev->refcnt--;
>> +	if (!vdev->refcnt) {
>> +		memset(&vdev->bar_info, 0, sizeof(vdev->bar_info));
>
> Why?

vfio_vgpu_device is allocated when the vgpu device is created by the vgpu core. QEMU/VMM then calls open() on that device, which populates vdev->bar_info and allocates vconfig. In the teardown path, QEMU/VMM calls close() on the device, and vfio_vgpu_device is destroyed when the vgpu device is destroyed by the vgpu core.

If QEMU/VMM restarts, the vgpu device is not destroyed; in that case vdev->bar_info should be cleared so that it is fetched again from the vendor driver rather than keeping any stale addresses.
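So the last close() drops the cached state, roughly like this (sketch):

static void vgpu_dev_close(void *device_data)
{
	struct vfio_vgpu_device *vdev = device_data;

	vdev->refcnt--;
	if (!vdev->refcnt) {
		/* QEMU/VMM may restart without the vgpu device being
		 * destroyed; drop the cached BAR info and the emulated
		 * config space so the next open() fetches fresh data
		 * from the vendor driver instead of reusing stale
		 * addresses.
		 */
		memset(&vdev->bar_info, 0, sizeof(vdev->bar_info));
		kfree(vdev->vconfig);
		vdev->vconfig = NULL;
	}
}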

>> +	if (irq_type == VFIO_PCI_MSIX_IRQ_INDEX)
>> +		return -1;
>
> How are we going to expand the API later for it?  Shouldn't this just
> be a passthrough to a gpu_devices_ops.vgpu_vfio_get_irq_info callback?

The vendor driver conveys the interrupt type by defining capabilities in config space. I don't think we should add a new callback for it.
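In other words, user space discovers MSI/MSI-X support the same way the guest does, by walking the capability list in the emulated config space, e.g. (sketch; helper is illustrative):

static bool vgpu_has_msix(struct vfio_vgpu_device *vdev)
{
	u8 pos = vdev->vconfig[PCI_CAPABILITY_LIST];

	/* Walk the capability chain the vendor driver exposed in the
	 * emulated config space; no extra callback is needed for this.
	 */
	while (pos) {
		if (vdev->vconfig[pos + PCI_CAP_LIST_ID] == PCI_CAP_ID_MSIX)
			return true;
		pos = vdev->vconfig[pos + PCI_CAP_LIST_NEXT];
	}

	return false;
}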


>> +		memcpy((void *)(vdev->vconfig + pos), (void *)user_data, count);
>
> So write is expected to user_data to allow only the writable bits to be
> changed?  What's really being saved in the vconfig here vs the vendor
> vgpu driver?  It seems like we're only using it to cache the BAR
> values, but we're not providing the BAR emulation here, which seems
> like one of the few things we could provide so it's not duplicated in
> every vendor driver.  But then we only need a few u32s to do that, not
> all of config space.
>

The vendor driver should emulate config space. It is not just BAR addresses; the vendor driver should also add the capabilities supported by its vGPU device.
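So on a config space write the flow is roughly: cache the value in vconfig so later reads return it, then forward the write to the vendor driver, which applies the real emulation. A sketch (vgpu_emul_space_config and the helper name are my placeholders here):

static ssize_t vgpu_config_write(struct vfio_vgpu_device *vdev,
				 char *user_data, size_t count, u64 pos)
{
	struct vgpu_device *vgpu_dev = vdev->vgpu_dev;
	struct gpu_device *gpu_dev = vgpu_dev->gpu_dev;
	ssize_t ret = count;

	/* Cache the written value so subsequent config reads see it. */
	memcpy(vdev->vconfig + pos, user_data, count);

	/* The vendor driver owns the real emulation: BAR sizing,
	 * capabilities, vendor specific registers.
	 */
	if (gpu_dev->ops->write)
		ret = gpu_dev->ops->write(vgpu_dev, user_data, count,
					  vgpu_emul_space_config, pos);

	return ret;
}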


>> +
>> +		if (gpu_dev->ops->write) {
>> +			ret = gpu_dev->ops->write(vgpu_dev,
>> +						  user_data,
>> +						  count,
>> +						  vgpu_emul_space_mmio,
>> +						  pos);
>> +		}
>
> What's the usefulness in a vendor driver that doesn't provide
> read/write?

The checks are there to avoid a NULL pointer dereference if these callbacks are not provided. Whether it will work or not depends entirely on the vendor driver stack in the host and guest.

>> +	case VFIO_PCI_ROM_REGION_INDEX:
>> +	case VFIO_PCI_VGA_REGION_INDEX:
>
> Wait a sec, who's doing the VGA emulation?  We can't be claiming to
> support a VGA region and then fail to provide read/write access to it
> like we said it has.
>

Nvidia vGPU doesn't support an IO BAR or a ROM BAR, but I can move these cases into

case VFIO_PCI_BAR0_REGION_INDEX ... VFIO_PCI_BAR5_REGION_INDEX:

so that if a vendor driver does support IO BAR or ROM BAR emulation, they are handled the same way as the other BARs.
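The read/write dispatch would then look something like this (sketch; helper names are illustrative):

static ssize_t vgpu_dev_rw(struct vfio_vgpu_device *vdev, char *user_data,
			   size_t count, loff_t *ppos, bool iswrite)
{
	int index = VFIO_PCI_OFFSET_TO_INDEX(*ppos);
	u64 pos = *ppos & VFIO_PCI_OFFSET_MASK;
	ssize_t ret;

	switch (index) {
	case VFIO_PCI_CONFIG_REGION_INDEX:
		ret = vgpu_dev_config_rw(vdev, user_data, count, pos, iswrite);
		break;
	case VFIO_PCI_BAR0_REGION_INDEX ... VFIO_PCI_BAR5_REGION_INDEX:
	case VFIO_PCI_ROM_REGION_INDEX:
	case VFIO_PCI_VGA_REGION_INDEX:
		/* ROM and VGA are treated like any other BAR: forwarded
		 * to the vendor driver's read/write callbacks, which can
		 * emulate them or fail the access.
		 */
		ret = vgpu_dev_bar_rw(vdev, user_data, count, pos, iswrite);
		break;
	default:
		ret = -EINVAL;
		break;
	}

	return ret;
}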


>> +	ret = remap_pfn_range(vma, virtaddr, pgoff, req_size, pg_prot);
>
> So not supporting validate_map_request() means that the user can
> directly mmap BARs of the host GPU and as shown below, we assume a 1:1
> mapping of vGPU BAR to host GPU BAR.  Is that ever valid in a vGPU
> scenario or should this callback be required?

Yes, if restrictions are imposed such that only one vGPU device can be created on one physical GPU, i.e. 1:1 vGPU to host GPU.

>  It's not clear to me how
> the vendor driver determines what this maps to, do they compare it to
> the physical device's own BAR addresses?
>

Yes, the vendor driver compares it against the physical device's own BAR addresses.
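Putting those two answers together, the fault path in the 1:1 case is roughly the sketch below; validate_map_request() lets the vendor driver adjust pgoff/req_size before the remap (the callback parameters shown are illustrative, not the exact signature from the patch):

static int vgpu_dev_mmio_fault(struct vm_area_struct *vma,
			       struct vm_fault *vmf)
{
	struct vfio_vgpu_device *vdev = vma->vm_private_data;
	struct vgpu_device *vgpu_dev = vdev->vgpu_dev;
	struct gpu_device *gpu_dev = vgpu_dev->gpu_dev;
	unsigned long req_size = vma->vm_end - vma->vm_start;
	unsigned long pgoff = vma->vm_pgoff;
	pgprot_t pg_prot = vma->vm_page_prot;

	if (gpu_dev->ops->validate_map_request) {
		/* The vendor driver translates the vGPU BAR offset to the
		 * host GPU BAR pfn, typically by comparing it against the
		 * physical device's own BAR addresses.
		 */
		if (gpu_dev->ops->validate_map_request(vgpu_dev,
						       vma->vm_start, &pgoff,
						       &req_size, &pg_prot))
			return VM_FAULT_SIGBUS;
	}

	/* Without the callback we assume a 1:1 vGPU to host GPU BAR
	 * mapping and remap the offset as-is.
	 */
	if (remap_pfn_range(vma, vma->vm_start, pgoff, req_size, pg_prot))
		return VM_FAULT_SIGBUS;

	return VM_FAULT_NOPAGE;
}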

Thanks,
Kirti

