Re: [Qemu-devel] [PATCH v7 0/4] Add Mediated device support

On Sat, 3 Sep 2016 22:04:56 +0530
Kirti Wankhede <kwankhede@xxxxxxxxxx> wrote:

> On 9/3/2016 3:18 AM, Paolo Bonzini wrote:
> > 
> > 
> > On 02/09/2016 20:33, Kirti Wankhede wrote:  
> >> <Alex> We could even do:  
> >>>>
> >>>> echo $UUID1:$GROUPA > create
> >>>>
> >>>> where $GROUPA is the group ID of a previously created mdev device into
> >>>> which $UUID1 is to be created and added to the same group.  
> >> </Alex>  
> > 
> > From the point of view of libvirt, I think I prefer Alex's idea.
> > <group> could be an additional element in the nodedev-create XML:
> > 
> >     <device>
> >       <name>my-vgpu</name>
> >       <parent>pci_0000_86_00_0</parent>
> >       <capability type='mdev'>
> >         <type id='11'/>
> >         <uuid>0695d332-7831-493f-9e71-1c85c8911a08</uuid>
> >         <group>group1</group>
> >       </capability>
> >     </device>
> > 
> > (should group also be a UUID?)
> >   
> 
> No, this should be a unique number in the system, similar to an iommu_group number.

Sorry, just trying to catch up on this thread after a long weekend.

We're talking about iommu groups here; we're not creating any sort of
parallel grouping specific to mdev devices.  This is why my example
created a device and then required the user to go find the group number
given to that device in order to create another device within the same
group.  iommu group numbering is not within the user's control and is
not a uuid.  libvirt can refer to the group as anything it wants in the
xml, but the host group number is allocated by the host, is not under the
user's control, and is not persistent.  libvirt would just be giving it a
name to
know which devices are part of the same group.  Perhaps the runtime xml
would fill in the group number once created.
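
As a purely illustrative sketch (none of this is in the patches, and it
assumes the created mdev shows up under /sys/bus/mdev/devices/<uuid> with
an iommu_group link, the same way vfio-pci devices expose theirs today),
reading back the host-allocated group number would look something like:

#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Resolve /sys/bus/mdev/devices/<uuid>/iommu_group and return the final
 * path component, which is the host-allocated group number.  Sketch only;
 * the mdev sysfs layout is an assumption here. */
static int mdev_iommu_group(const char *uuid, char *group, size_t len)
{
    char path[256], link[256];
    const char *num;
    ssize_t n;

    snprintf(path, sizeof(path),
             "/sys/bus/mdev/devices/%s/iommu_group", uuid);
    n = readlink(path, link, sizeof(link) - 1);
    if (n < 0)
        return -1;
    link[n] = '\0';

    /* link target looks like ../../../kernel/iommu_groups/26 */
    num = strrchr(link, '/');
    snprintf(group, len, "%s", num ? num + 1 : link);
    return 0;
}

int main(int argc, char **argv)
{
    char group[32];

    if (argc < 2 || mdev_iommu_group(argv[1], group, sizeof(group)))
        return 1;
    printf("%s\n", group);  /* host-chosen number, not persistent */
    return 0;
}

libvirt (or the user) would then feed that number back into the next
create for any device that has to land in the same group.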

There were also a lot of unanswered questions in my proposal; it's not
clear that there's a standard algorithm for when mdev devices need to
be grouped together.  Should we even allow groups to span multiple host
devices?  Should they be allowed to span devices from different
vendors?

If we imagine a scenario of a group composed of a mix of Intel and
NVIDIA vGPUs, what happens when an Intel device is opened first?  The
NVIDIA driver wouldn't know about this, but it would know when the
first NVIDIA device is opened and be able to establish p2p for the
NVIDIA devices at that point.  Can we do what we need with that model?
What if libvirt is asked to hot-add an NVIDIA vGPU?  It would need to
do a create on the NVIDIA parent device with the existing group id, at
which point the NVIDIA vendor driver could fail the device create if
the p2p setup has already been done.  The Intel vendor driver might
allow it.  Similar to open, the last close of the mdev device for a
given vendor (which might not be the last close of mdev devices within
the group) would need to trigger the offline process for that vendor.
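
In vendor driver terms I'm picturing something like the following (a
sketch only to illustrate the idea, the names are made up, locking
omitted):

#include <stdbool.h>

/* Hypothetical per-vendor, per-group bookkeeping: p2p is set up on the
 * first open of this vendor's mdevs in the group and torn down on the
 * last close, independent of other vendors' devices in the group. */
struct vendor_group {
    int group_id;       /* host iommu group number */
    int open_count;     /* this vendor's open mdevs in the group */
    bool p2p_ready;
};

/* Stand-ins for whatever the vendor actually does to wire up p2p. */
static int vendor_setup_p2p(int group_id) { (void)group_id; return 0; }
static void vendor_teardown_p2p(int group_id) { (void)group_id; }

static int vendor_mdev_open(struct vendor_group *vg)
{
    if (vg->open_count++ == 0) {
        /* first open of this vendor's devices in the group */
        if (vendor_setup_p2p(vg->group_id)) {
            vg->open_count--;
            return -1;
        }
        vg->p2p_ready = true;
    }
    return 0;
}

static void vendor_mdev_release(struct vendor_group *vg)
{
    if (--vg->open_count == 0 && vg->p2p_ready) {
        /* last close of this vendor's devices in the group */
        vendor_teardown_p2p(vg->group_id);
        vg->p2p_ready = false;
    }
}

The create path could consult the same state to refuse a new device once
p2p_ready is set, which is the hot-add failure case described above.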

That all sounds well and good... here's the kicker: all devices within an
iommu group necessarily need to be part of the same iommu context, i.e.
the same vfio container.  How do we deal with vIOMMUs within the guest when we
are intentionally forcing a set of devices within the same context?
This is why it's _very_ beneficial on the host to create iommu groups
with the smallest number of devices we can reasonably trust to be
isolated.  We're backing ourselves into a corner if we tell libvirt
that the standard process is to put all mdev devices into a single
group.  The grouping/startup issue is still unresolved in my head.
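
To make the constraint concrete, this is the userspace sequence I mean,
stripped to the ioctls with error handling omitted; the uuid is the one
from the XML example above and the group number is arbitrary:

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

int main(void)
{
    int container = open("/dev/vfio/vfio", O_RDWR);
    int group = open("/dev/vfio/26", O_RDWR);   /* host group number */
    int device;

    /* The group fd binds to exactly one container/iommu context... */
    ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
    ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);

    /* ...and every device in the group, mdev or otherwise, can only be
     * obtained through that group fd, so they all share that context.
     * For an mdev the device name is its uuid. */
    device = ioctl(group, VFIO_GROUP_GET_DEVICE_FD,
                   "0695d332-7831-493f-9e71-1c85c8911a08");
    (void)device;
    return 0;
}

So if a vIOMMU in the guest wants two of those devices in different
address spaces, a single shared group/container simply can't express it.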
Thanks,

Alex

--
libvir-list mailing list
libvir-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/libvir-list


