Hi Daniel,
On 2020/4/13 10:25, Daniel Drake wrote:
On Fri, Apr 10, 2020 at 9:22 AM Lu Baolu <baolu.lu@xxxxxxxxxxxxxxx> wrote:
This is caused by the fragile private domain implementation. We are in
the process of removing it by enhancing the iommu subsystem with
per-group default domains.
https://www.spinics.net/lists/iommu/msg42976.html
So ultimately VMD subdevices should have their own per-device iommu data
and support per-device dma ops.
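As a rough illustration of what "per-device dma ops" would mean in
practice (a minimal sketch, not the actual implementation: the ops are
chosen from the default domain that the individual device ended up in,
so a VMD subdevice no longer inherits its parent's choice;
sketch_iommu_dma_ops below is a hypothetical stand-in for whatever
dma_map_ops table the IOMMU driver actually installs):

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/iommu.h>

/* Hypothetical: the dma_map_ops table the IOMMU driver would install. */
extern const struct dma_map_ops sketch_iommu_dma_ops;

static void sketch_setup_dma_ops(struct device *dev)
{
	struct iommu_domain *domain = iommu_get_domain_for_dev(dev);

	if (domain && domain->type == IOMMU_DOMAIN_DMA) {
		/* Device sits behind a translating domain: install IOMMU ops. */
		set_dma_ops(dev, &sketch_iommu_dma_ops);
	} else {
		/* Identity/passthrough: no per-device ops, use dma-direct. */
		set_dma_ops(dev, NULL);
	}
}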
Interesting. There's also this patchset you posted:
[PATCH 00/19] [PULL REQUEST] iommu/vt-d: patches for v5.7
https://lists.linuxfoundation.org/pipermail/iommu/2020-April/042967.html
(to be pushed out to 5.8)
Both are trying to solve the same problem.
I have synced with Joerg. This patch set will be replaced by Joerg's
proposal because of concerns about a race between domain switching and
driver binding. I will rebase all of the vt-d patches in this set on
top of Joerg's change.
Best regards,
baolu
In there you have:
iommu/vt-d: Don't force 32bit devices to uses DMA domain
which seems to clash with the approach being explored in this thread.
And:
iommu/vt-d: Apply per-device dma_ops
This effectively solves the trip point that caused me to open these
discussions, where intel_map_page() -> iommu_need_mapping() would
incorrectly determine that an intel-iommu DMA mapping was needed for a
PCI subdevice running in identity mode. After this patch, a PCI
subdevice in identity mode uses the default system dma_ops and
completely avoids intel-iommu.
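To make the effect concrete, here is a simplified sketch of the
dma-mapping core's per-device dispatch (modelled on
dma_map_page_attrs(), not a verbatim copy) showing why an identity-mode
subdevice never reaches intel-iommu code such as intel_map_page():

#include <linux/dma-direct.h>
#include <linux/dma-mapping.h>

static dma_addr_t sketch_map_page(struct device *dev, struct page *page,
				  unsigned long offset, size_t size,
				  enum dma_data_direction dir,
				  unsigned long attrs)
{
	/* Per-device lookup: dev->dma_ops if set, else the arch default. */
	const struct dma_map_ops *ops = get_dma_ops(dev);

	/*
	 * Identity-mode subdevice: no IOMMU ops installed, so it takes
	 * the default direct-mapping path.
	 */
	if (!ops)
		return dma_direct_map_page(dev, page, offset, size, dir, attrs);

	/* Device attached to a translating domain: per-device IOMMU ops. */
	return ops->map_page(dev, page, offset, size, dir, attrs);
}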
So that solves the issues I was looking at. Jon, you might want to
check whether the problems you are seeing are likewise solved by these
patches.
I didn't try Joerg's iommu group rework yet as it conflicts with those
patches above.
Daniel