Re: [libvirt] [Qemu-devel] [PATCH v7 0/4] Add Mediated device support

On 03/09/2016 13:56, John Ferlan wrote:
> On 09/02/2016 05:48 PM, Paolo Bonzini wrote:
>> On 02/09/2016 20:33, Kirti Wankhede wrote:
>>> <Alex> We could even do:
>>>>>
>>>>> echo $UUID1:$GROUPA > create
>>>>>
>>>>> where $GROUPA is the group ID of a previously created mdev device into
>>>>> which $UUID1 is to be created and added to the same group.
>>> </Alex>
>>
>> From the point of view of libvirt, I think I prefer Alex's idea.
>> <group> could be an additional element in the nodedev-create XML:
>>
>>     <device>
>>       <name>my-vgpu</name>
>>       <parent>pci_0000_86_00_0</parent>
>>       <capability type='mdev'>
>>         <type id='11'/>
>>         <uuid>0695d332-7831-493f-9e71-1c85c8911a08</uuid>
>>         <group>group1</group>
>>       </capability>
>>     </device>
>>
>> (should group also be a UUID?)
> 
> As long as create_group handles all the work and all libvirt does is
> call it, get the return status/error, and handle deleting the vGPU on
> error, then I guess it's doable.
> 
> Alternatively, having multiple <type id='#'> elements in the XML and
> performing a single *mdev/create_group is an option.

I don't really like the idea of a single nodedev-create creating
multiple devices, but that would work too.
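
Just to spell out what that would look like from libvirt's side with
Alex's interface, here is a rough sketch for a two-vGPU group.  The
paths, and the way the group ID is read back, are my guesses rather
than anything from the patches:

    # illustrative only -- follows the "echo $UUID:$GROUP > create" idea above
    cd /sys/bus/pci/devices/0000:86:00.0      # the parent device
    UUID1=$(uuidgen); UUID2=$(uuidgen)
    echo $UUID1 > create                      # first vGPU gets a fresh group
    # assuming the new mdev appears under /sys/bus/mdev/devices with the
    # usual iommu_group link:
    GROUPA=$(basename $(readlink /sys/bus/mdev/devices/$UUID1/iommu_group))
    echo $UUID2:$GROUPA > create              # second vGPU joins UUID1's group

Either way libvirt only shuffles UUIDs around and the kernel does the
real work.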

> That is, what is the "output" from create_group that gets added to the
> domain XML?  How is that found?

A new sysfs path is created, whose name depends on the UUID.  The UUID
is used in a <hostdev> element in the domain XML and the sysfs path
appears in the QEMU command line.  Kirti and Neo had examples in their
presentation at KVM Forum.
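
To make that concrete, something like this (a sketch from memory, so
treat the exact paths and -device syntax as approximate):

    $ ls /sys/bus/mdev/devices/
    0695d332-7831-493f-9e71-1c85c8911a08
    $ qemu-system-x86_64 ... \
        -device vfio-pci,sysfsdev=/sys/bus/mdev/devices/0695d332-7831-493f-9e71-1c85c8911a08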

If you create multiple devices in the same group, they are added to the
same IOMMU group, so they must be used by the same VM.  However, they
don't have to be available from the beginning; they can be
hotplugged/hot-unplugged later, since from the VM's point of view each
of them is just another PCI device.
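
So hotplug would be the usual vfio-pci dance, e.g. on the QEMU monitor
(again only a sketch; the id and UUID are placeholders):

    (qemu) device_add vfio-pci,sysfsdev=/sys/bus/mdev/devices/$UUID,id=vgpu1
    (qemu) device_del vgpu1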

> Also, once the domain is running can a
> vGPU be added to the group?  Removed?  What allows/prevents?

Kirti?... :)

In principle I don't think anything should block vGPUs from different
groups being added to the same VM, but I have to defer to Alex and Kirti
again on this.

>> Since John brought up the topic of minimal XML, in this case it will be
>> like this:
>>
>>     <device>
>>       <name>my-vgpu</name>
>>       <parent>pci_0000_86_00_0</parent>
>>       <capability type='mdev'>
>>         <type id='11'/>
>>       </capability>
>>     </device>
>>
>> The uuid will be autogenerated by libvirt and, if there's no <group>
>> (as is common for VMs with only one vGPU), it will be a single-device
>> group.
> 
> The <name> could be ignored, as it seems existing libvirt code wants to
> generate a name via udevGenerateDeviceName for other devices. I haven't
> studied it long enough, but I believe that's how those pci_####* names
> are created.

Yeah, that makes sense.  So we get down to a minimal XML that has just
<parent> plus <capability> with <type> in it; the optional elements
would be <name> (ignored anyway) and, within <capability>, <uuid> and
<group>.
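
From the user's point of view the flow would then be the familiar one,
assuming nodedev-create learns about mdev (the generated device name
below is a placeholder; dumpxml is how you would recover the
autogenerated uuid):

    $ virsh nodedev-create my-vgpu.xml
    Node device <generated-name> created from my-vgpu.xml
    $ virsh nodedev-dumpxml <generated-name>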

Thanks,

Paolo