RE: [RFC 11/20] iommu/iommufd: Add IOMMU_IOASID_ALLOC/FREE

> From: David Gibson
> Sent: Friday, October 1, 2021 2:11 PM
> 
> On Sun, Sep 19, 2021 at 02:38:39PM +0800, Liu Yi L wrote:
> > This patch adds IOASID allocation/free interface per iommufd. When
> > allocating an IOASID, userspace is expected to specify the type and
> > format information for the target I/O page table.
> >
> > This RFC supports only one type (IOMMU_IOASID_TYPE_KERNEL_TYPE1V2),
> > implying a kernel-managed I/O page table with vfio type1v2 mapping
> > semantics. For this type the user should specify the addr_width of
> > the I/O address space and whether the I/O page table is created in
> > an iommu enforce_snoop format. enforce_snoop must be true at this point,
> > as the false setting requires additional contract with KVM on handling
> > WBINVD emulation, which can be added later.
> >
> > Userspace is expected to call IOMMU_CHECK_EXTENSION (see next patch)
> > to check which formats can be specified when allocating an IOASID.
> >
> > Open:
> > - Devices on PPC platform currently use a different iommu driver in vfio.
> >   Per previous discussion they can also use vfio type1v2 as long as there
> >   is a way to claim a specific iova range from a system-wide address space.
> >   This requirement doesn't sound PPC specific, as addr_width for pci
> >   devices can also be represented by a range [0, 2^addr_width-1].
> >   This RFC hasn't adopted this design yet. We hope to have formal
> >   alignment in v1 discussion and then decide how to incorporate it
> >   in v2.
> 
> Ok, there are several things we need for ppc.  None of which are
> inherently ppc specific and some of which will I think be useful for
> most platforms.  So, starting from most general to most specific
> here's basically what's needed:
> 
> 1. We need to represent the fact that the IOMMU can only translate
>    *some* IOVAs, not a full 64-bit range.  You have the addr_width
>    already, but I'm not entirely sure if the translatable range on ppc
>    (or other platforms) is always a power-of-2 size.  It usually will
>    be, of course, but I'm not sure that's a hard requirement.  So
>    using a size/max rather than just a number of bits might be safer.
> 
>    I think basically every platform will need this.  Most platforms
>    don't actually implement full 64-bit translation in any case, but
>    rather some smaller number of bits that fits their page table
>    format.
> 
> 2. The translatable range of IOVAs may not begin at 0.  So we need to
>    advertise to userspace what the base address is, as well as the
>    size.  POWER's main IOVA range begins at 2^59 (at least on the
>    models I know about).
> 
>    I think a number of platforms are likely to want this, though I
>    couldn't name them apart from POWER.  Putting the translated IOVA
>    window at some huge address is a pretty obvious approach to making
>    an IOMMU which can translate a wide address range without colliding
>    with any legacy PCI addresses down low (the IOMMU can check if this
>    transaction is for it by just looking at some high bits in the
>    address).
> 
> 3. There might be multiple translatable ranges.  So, on POWER the
>    IOMMU can typically translate IOVAs from 0..2GiB, and also from
>    2^59..2^59+<RAM size>.  The two ranges have completely separate IO
>    page tables, with (usually) different layouts.  (The low range will
>    nearly always be a single-level page table with 4kiB or 64kiB
>    entries, the high one will be multiple levels depending on the size
>    of the range and pagesize).
> 
>    This may be less common, but I suspect POWER won't be the only
>    platform to do something like this.  As above, using a high range
>    is a pretty obvious approach, but clearly won't handle older
>    devices which can't do 64-bit DMA.  So adding a smaller range for
>    those devices is again a pretty obvious solution.  Any platform
>    with an "IO hole" can be treated as having two ranges, one below
>    the hole and one above it (although in that case they may well not
>    have separate page tables).

1-3 are common on all platforms with fixed reserved ranges. Current
vfio already reports the permitted iova ranges to the user via
VFIO_IOMMU_TYPE1_INFO_CAP_IOVA_RANGE, and the user is expected to
construct maps only within those ranges. iommufd can follow the same
logic for the baseline uAPI.
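
For readers less familiar with that interface, this is roughly how the
user discovers the permitted ranges today (capability-chain walk on
VFIO_IOMMU_GET_INFO; error handling omitted for brevity):

#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* The first VFIO_IOMMU_GET_INFO call learns the required buffer
 * size, the second fills in the capability chain.
 */
static void print_iova_ranges(int container_fd)
{
	struct vfio_iommu_type1_info tmp = { .argsz = sizeof(tmp) };
	struct vfio_iommu_type1_info *info;
	struct vfio_info_cap_header *cap;

	ioctl(container_fd, VFIO_IOMMU_GET_INFO, &tmp);
	info = calloc(1, tmp.argsz);
	info->argsz = tmp.argsz;
	ioctl(container_fd, VFIO_IOMMU_GET_INFO, info);

	if (!(info->flags & VFIO_IOMMU_INFO_CAPS))
		goto out;

	/* cap->next is an offset from the start of the info buffer;
	 * zero terminates the chain.
	 */
	for (cap = (void *)info + info->cap_offset; ;
	     cap = (void *)info + cap->next) {
		if (cap->id == VFIO_IOMMU_TYPE1_INFO_CAP_IOVA_RANGE) {
			struct vfio_iommu_type1_info_cap_iova_range *r =
				(void *)cap;
			unsigned int i;

			for (i = 0; i < r->nr_iovas; i++)
				printf("iova [%llx, %llx]\n",
				       (unsigned long long)r->iova_ranges[i].start,
				       (unsigned long long)r->iova_ranges[i].end);
		}
		if (!cap->next)
			break;
	}
out:
	free(info);
}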

For the above cases a [base, max] hint can be provided by the user per
Jason's recommendation. It is only a hint because no additional
restriction is imposed; the kernel cares only that the user does not
violate the permitted ranges it has reported. The underlying iommu
driver may use this hint for optimization, e.g. deciding how many
levels to use for the kernel-managed page table based on the max
address.
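
As a purely illustrative sketch of how such a hint could ride along
with the allocation request (field names here are hypothetical, not
taken from the actual RFC):

/*
 * Hypothetical layout, for illustration only. The [base, max] pair
 * is advisory: the kernel may ignore it and still only enforces the
 * permitted ranges it reports.
 */
struct iommu_ioasid_alloc {
	__u32	argsz;
	__u32	flags;
	__u32	type;		/* e.g. IOMMU_IOASID_TYPE_KERNEL_TYPE1V2 */
	__u32	addr_width;	/* width of the I/O address space */
	__u64	base_hint;	/* lowest iova the user intends to map */
	__u64	max_hint;	/* highest iova the user intends to map */
};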

> 
> 4. The translatable ranges might not be fixed.  On ppc that 0..2GiB
>    and 2^59..whatever ranges are kernel conventions, not specified by
>    the hardware or firmware.  When running as a guest (which is the
>    normal case on POWER), there are explicit hypercalls for
>    configuring the allowed IOVA windows (along with pagesize, number
>    of levels etc.).  At the moment it is fixed in hardware that there
>    are only 2 windows, one starting at 0 and one at 2^59 but there's
>    no inherent reason those couldn't also be configurable.

If the ppc iommu driver needs to configure hardware according to the
specified ranges, then it requires more than a hint and is better
conveyed via a ppc-specific command, as Jason suggested.
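
Just to make the hint-vs-command distinction concrete, such a command
could carry something like the below (entirely hypothetical, none of
this exists):

/*
 * Hypothetical ppc-specific window configuration. Unlike the hint
 * above, a range the driver cannot program is a hard error rather
 * than something the kernel may silently ignore.
 */
struct iommu_ioasid_ppc_set_window {
	__u32	argsz;
	__u32	flags;
	__u32	ioasid;
	__u32	levels;		/* IO page table levels */
	__u64	start;		/* window base, e.g. 0 or 1ULL << 59 */
	__u64	size;		/* window size in bytes */
	__u64	pagesize;	/* e.g. 4KiB or 64KiB */
};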

> 
>    This will probably be rarer, but I wouldn't be surprised if it
>    appears on another platform.  If you were designing an IOMMU ASIC
>    for use in a variety of platforms, making the base address and size
>    of the translatable range(s) configurable in registers would make
>    sense.
> 
> 
> Now, for (3) and (4), representing lists of windows explicitly in
> ioctl()s is likely to be pretty ugly.  We might be able to avoid that,
> for at least some of the interfaces, by using the nested IOAS stuff.
> One way or another, though, the IOASes which are actually attached to
> devices need to represent both windows.
> 
> e.g.
> Create a "top-level" IOAS <A> representing the device's view.  This
> would be either TYPE_KERNEL or maybe a special type.  Into that you'd
> make just two iomappings, one for each of the translation windows,
> pointing to IOASes <B> and <C>.  IOAS <B> and <C> would have a single
> window, and would represent the IO page tables for each of the
> translation windows.  These could be either TYPE_KERNEL or (say)
> TYPE_POWER_TCE for a user managed table.  Well.. in theory, anyway.
> The way paravirtualization on POWER is done might mean user managed
> tables aren't really possible for other reasons, but that's not
> relevant here.
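
To restate the proposed sequence in pseudo-code (all call names are
made up, since none of this uAPI exists yet):

	/* <A>: the device's full view */
	top  = ioasid_alloc(fd, TYPE_TOPLEVEL, ...);
	/* <B> and <C>: one single-window IOAS per translation window */
	low  = ioasid_alloc(fd, TYPE_KERNEL, ...);
	high = ioasid_alloc(fd, TYPE_POWER_TCE, ...);

	/* the only two mappings in <A> point at the child IOASes */
	iomap_ioasid(fd, top, 0x0,        SZ_2G,    low);
	iomap_ioasid(fd, top, 1ULL << 59, ram_size, high);

	/* devices attach to <A>, which represents both windows */
	attach_device(fd, device, top);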
> 
> The next problem here is that we don't want userspace to have to do
> different things for POWER, at least not for the easy case of a
> userspace driver that just wants a chunk of IOVA space and doesn't
> really care where it is.
> 
> In general I think the right approach to handle that is to
> de-emphasize "info" or "query" interfaces.  We'll probably still need
> some for debugging and edge cases, but in the normal case userspace
> should just specify what it *needs* and (ideally) no more with
> optional hints, and the kernel will either supply that or fail.
> 
> e.g. A simple userspace driver would simply say "I need an IOAS with
> at least 1GiB of IOVA space" and the kernel says "Ok, you can use
> 2^59..2^59+2GiB".  qemu, emulating the POWER vIOMMU might say "I need
> an IOAS with translatable addresses from 0..2GiB with 4kiB page size
> and from 2^59..2^59+1TiB with 64kiB page size" and the kernel would
> either say "ok", or "I can't do that".
> 

This doesn't work for other platforms, where a vIOMMU is not mandatory
as it is on ppc. For those platforms the initial address space is GPA
(in the VM case), and Qemu needs to mark the GPA holes as reserved in
firmware structures. I don't think anyone wants a tedious try-and-fail
process to figure out how many holes exist in a 64-bit address
space...
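
With the reported ranges, deriving those holes is a simple walk, e.g.
(sketch only, assuming the ranges come back sorted and
non-overlapping):

struct range { __u64 start, end; };

/* Derive the reserved holes Qemu must describe to firmware from n
 * sorted, non-overlapping permitted ranges. With try-and-fail there
 * is no bounded way to discover these.
 */
static int ranges_to_holes(const struct range *r, int n,
			   struct range *hole)
{
	int i, nh = 0;

	for (i = 0; i + 1 < n; i++) {
		hole[nh].start = r[i].end + 1;
		hole[nh].end   = r[i + 1].start - 1;
		nh++;
	}
	return nh;	/* holes between consecutive permitted ranges */
}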

Thanks
Kevin



