Re: Plan for /dev/ioasid RFC v2

On Fri, Jun 18, 2021 at 01:21:47PM +0800, Lu Baolu wrote:
> Hi David,
> 
> On 6/17/21 1:22 PM, David Gibson wrote:
> > > The iommu_group can guarantee the isolation among different physical
> > > devices (represented by RIDs). But when it comes to sub-devices (e.g. mdev or
> > > vDPA devices represented by RID + SSID), we have to rely on the
> > > device driver for isolation. The devices which are able to generate sub-
> > > devices should either use their own on-device mechanisms or use the
> > > platform features like Intel Scalable IOV to isolate the sub-devices.
> > This seems like a misunderstanding of groups.  Groups are not tied to
> > any PCI meaning.  Groups are the smallest unit of isolation, no matter
> > what is providing that isolation.
> > 
> > If mdevs are isolated from each other by clever software, even though
> > they're on the same PCI device they are in different groups from each
> > other *by definition*.  They are also in a different group from their
> > parent device (however, the mdevs only exist while the mdev driver is
> > active, which implies that the parent device's group is owned by the
> > kernel).
> 
> 
> You are right. This is also my understanding of an "isolation group".
> 
> But, as I understand it, iommu_group is only the isolation group visible
> to the IOMMU. When we talk about sub-devices (sw-mdev or mdev w/ pasid),
> only the device and its driver know the details of isolation, hence
> iommu_group cannot be extended to cover them. The device drivers
> should define their own isolation groups.

So, "iommu group" isn't a perfect name.  It came about because
originally the main mechanism for isolation was the IOMMU, so it was
typically the IOMMU's capabilities that determined if devices were
isolated.  However, it was always known that there could be other
reasons for isolation to fail.  To simplify the model, we decided
that we'd put things into the same group if they were non-isolated for
any reason.

The kernel has no notion of "isolation group" as distinct from "iommu
group".  What are called iommu groups in the kernel now *are*
"isolation groups" and that was always the intention - it's just not a
great name.
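For reference, the kernel already exposes these groups through sysfs,
so you can see directly which devices ended up sharing an isolation
group (the output is hardware-dependent, and the directory only exists
when an IOMMU is active):

```shell
# List each iommu group and the devices it contains.
# /sys/kernel/iommu_groups only exists when an IOMMU is enabled.
for g in /sys/kernel/iommu_groups/*; do
    [ -d "$g" ] || continue        # no groups (e.g. IOMMU disabled)
    echo "group ${g##*/}:"
    ls "$g/devices"
done
```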

> Otherwise, the device driver has to fake an iommu_group and add hacky
> code to link the related IOMMU elements (iommu device, domain, group
> etc.) together. Actually this is part of the problem that this proposal
> tries to solve.

Yeah, that's not ideal.

> > > Under the above conditions, different sub-devices of the same RID device
> > > could use different IOASIDs. This seems to mean that we can't
> > > support a mixed mode where, for example, two RIDs share an iommu_group and
> > > one (or both) of them have sub-devices.
> > That doesn't necessarily follow.  mdevs which can be successfully
> > isolated by their mdev driver are in a different group from their
> > parent device, and therefore need not be affected by whether the
> > parent device shares a group with some other physical device.  They
> > *might*  be, but that's up to the mdev driver to determine based on
> > what it can safely isolate.
> > 
> 
> If we understand it as multiple levels of isolation, can we classify the
> devices into the following categories?
> 
> 1) Legacy devices
>    - devices without device-level isolation
>    - multiple devices could sit in a single iommu_group
>    - only a single I/O address space could be bound to IOMMU

I'm not really clear on what that last statement means.

> 2) Modern devices
>    - devices capable of device-level isolation

This will *typically* be true of modern devices, but I don't think we
can really make it a hard API distinction.  Legacy or buggy bridges
can force modern devices into the same group as each other.  Modern
devices are not immune from bugs which would force lack of isolation
(e.g. forgotten debug registers on function 0 which affect other
functions).

>    - able to have subdevices
>    - self-isolated, hence not share iommu_group with others
>    - multiple I/O address spaces could be bound to IOMMU
> 
> For 1), all devices in an iommu_group should be bound to a single
> IOASID; the isolation is guaranteed by the iommu_group.
> 
> For 2) a single device could be bound to multiple IOASIDs with each sub-
> device corresponding to an IOASID. The isolation of each subdevice is
> guaranteed by the device driver.
> 
> Best regards,
> baolu
> 

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson
