Re: [PATCH v7 00/22] Generic DT bindings for PCI IOMMUs and ARM SMMU

Hi Eric,

On 15/09/16 17:46, Auger Eric wrote:
[...]
> Hmm, OK; thanks for the explanation. With that implementation, however,
> don't we run back into the issue we hit in the early stages of the
> default domain work?
> 
> With this sample config (AMD Overdrive + I350-T2 + 2 VFs per PF) I fill
> all 8 context banks, whereas in practice we didn't need that many before:
> 
> 00:00.0 0600: 1022:1a00
> 	Subsystem: 1022:1a00
> 00:02.0 0600: 1022:1a01
> 00:02.2 0604: 1022:1a02
> 	Kernel driver in use: pcieport
> 01:00.0 0200: 8086:1521 (rev 01)
> 	Subsystem: 8086:0002
> 	Kernel driver in use: igb
> 01:00.1 0200: 8086:1521 (rev 01)
> 	Subsystem: 8086:0002
> 	Kernel driver in use: igb
> 01:10.0 0200: 8086:1520 (rev 01) -> context 5
> 	Subsystem: 8086:0002
> 	Kernel driver in use: vfio-pci
> 01:10.1 0200: 8086:1520 (rev 01) -> context 7
> 	Subsystem: 8086:0002
> 	Kernel driver in use: igbvf
> 01:10.4 0200: 8086:1520 (rev 01) -> context 6
> 	Subsystem: 8086:0002
> 	Kernel driver in use: igbvf
> 01:10.5 0200: 8086:1520 (rev 01) -> shortage
> 	Subsystem: 8086:0002
> 	Kernel driver in use: igbvf
> 
> So I can't even do passthrough anymore with that config. Is there
> anything wrong in my setup/understanding?

It's kind of hard to avoid, really - people want DMA ops support (try
plugging a card which can only do 32-bit DMA into that Seattle, for
instance); DMA ops need default domains; default domains are allocated
per group; and each domain requires a context bank to back it. Thus with
9 groups and only 8 context banks, you're backed into a corner. Having a
single default domain per SMMU would relieve the pressure, but there's
no way to do that given how the iommu_domain_alloc() API is intended to
work.
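To put concrete numbers on it, here's a toy model (plain C, emphatically
not the real arm-smmu code) of that one-bank-per-default-domain
arithmetic, using the 9 groups from your listing:

	/* Toy model of the shortage above: one default domain per
	 * IOMMU group, one context bank per domain, and a fixed pool
	 * of 8 banks as on Seattle. Illustration only. */
	#include <stdio.h>

	#define NUM_CONTEXT_BANKS	8

	static int bank_used[NUM_CONTEXT_BANKS];

	/* Stand-in for the driver's "grab a free context bank" step */
	static int alloc_context_bank(void)
	{
		int i;

		for (i = 0; i < NUM_CONTEXT_BANKS; i++) {
			if (!bank_used[i]) {
				bank_used[i] = 1;
				return i;
			}
		}
		return -1;	/* pool exhausted */
	}

	int main(void)
	{
		int group;

		/* 9 groups, as in the config quoted above */
		for (group = 0; group < 9; group++) {
			int cb = alloc_context_bank();

			if (cb < 0)
				printf("group %d: shortage\n", group);
			else
				printf("group %d -> context bank %d\n",
				       group, cb);
		}
		return 0;
	}

The ninth allocation fails, which is exactly the "shortage" your 01:10.5
VF ran into.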

Ultimately, it's a hardware limitation of that platform - plug in a card
with 16 ACS-capable VFs, and you're stuck either way. There are a number
of bodges I can think of that would make your specific situation work,
but none of them is really general enough to consider upstreaming. The
most logical thing to do right now, if you were happy using the old
binding without DMA ops before, is to keep using the old binding (just
fix your DT with #stream-id-cells = <0> on the host controller so as not
to create the fake aliasing problem). Hopefully future platforms will be
in a position to couple their PCI host controllers to an IOMMU that is
actually designed to support one.
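For reference, that DT tweak would look something like this (node names,
unit addresses and the elided properties are placeholders from an
imaginary tree - the #stream-id-cells line is the only point):

	smmu: iommu@e0a00000 {
		/* ...existing SMMU properties unchanged... */

		/* Old binding: each master is a phandle followed by
		 * #stream-id-cells worth of StreamIDs */
		mmu-masters = <&pcie0>;
	};

	pcie0: pcie@f0000000 {
		/* ...existing host controller properties... */

		/* Zero StreamID cells on the host controller, so it
		 * no longer creates the fake aliasing problem */
		#stream-id-cells = <0>;
	};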

What I probably will do, though, is add a command-line option to disable
DMA domains even for the generic bindings: we already have the
functionality in place for the sake of the old binding, and I think
there are other folks who want PCI iommu-map support but would prefer
not to bother with DMA ops on the host.

Robin.