Re: [PATCH] drm/i915: Use per device iommu check

On 12/11/2021 00:58, Lu Baolu wrote:
On 11/11/21 11:18 PM, Tvrtko Ursulin wrote:

On 10/11/2021 14:37, Robin Murphy wrote:
On 2021-11-10 14:11, Tvrtko Ursulin wrote:

On 10/11/2021 12:35, Lu Baolu wrote:
On 2021/11/10 20:08, Tvrtko Ursulin wrote:

On 10/11/2021 12:04, Lu Baolu wrote:
On 2021/11/10 17:30, Tvrtko Ursulin wrote:

On 10/11/2021 07:12, Lu Baolu wrote:
Hi Tvrtko,

On 2021/11/9 20:17, Tvrtko Ursulin wrote:
From: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>

On igfx + dgfx setups, it appears that the intel_iommu=igfx_off option only disables the igfx iommu. Stop relying on the global intel_iommu_gfx_mapped and instead probe for the presence of an iommu domain per device to accurately reflect its status.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
Cc: Lu Baolu <baolu.lu@xxxxxxxxxxxxxxx>
---
Baolu, is my understanding here correct? Maybe I am confused by both intel_iommu_gfx_mapped and dmar_map_gfx being globals in the intel_iommu driver. But it certainly appears the setup can assign some iommu ops (and assign the discrete i915 to an iommu group) even when those two are set to off.

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index e967cd08f23e..9fb38a54f1fe 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1763,26 +1763,27 @@ static inline bool run_as_guest(void)
  #define HAS_D12_PLANE_MINIMIZATION(dev_priv) (IS_ROCKETLAKE(dev_priv) || \
                            IS_ALDERLAKE_S(dev_priv))

-static inline bool intel_vtd_active(void)
+static inline bool intel_vtd_active(struct drm_i915_private *i915)
  {
-#ifdef CONFIG_INTEL_IOMMU
-    if (intel_iommu_gfx_mapped)
+    if (iommu_get_domain_for_dev(i915->drm.dev))
          return true;
-#endif

      /* Running as a guest, we assume the host is enforcing VT'd */
      return run_as_guest();
  }

Have you verified this change? I am afraid that
iommu_get_domain_for_dev() always returns a valid iommu domain even
when intel_iommu_gfx_mapped == 0.

Yes, it seems to work as is:

default:

# grep -i iommu /sys/kernel/debug/dri/*/i915_capabilities
/sys/kernel/debug/dri/0/i915_capabilities:iommu: enabled
/sys/kernel/debug/dri/1/i915_capabilities:iommu: enabled

intel_iommu=igfx_off:

# grep -i iommu /sys/kernel/debug/dri/*/i915_capabilities
/sys/kernel/debug/dri/0/i915_capabilities:iommu: disabled
/sys/kernel/debug/dri/1/i915_capabilities:iommu: enabled

On my system dri device 0 is integrated graphics and 1 is discrete.

The drm device 0 has a dedicated iommu. When the user requests that igfx not be mapped, the VT-d implementation will turn it off to save power. But for
a shared iommu, you will definitely get it enabled.

Sorry, I am not following; what exactly do you mean? Is there a platform with integrated graphics but without a dedicated iommu, where intel_iommu=igfx_off results in intel_iommu_gfx_mapped == 0 while iommu_get_domain_for_dev() still returns non-NULL?

Your code always works for an igfx with a dedicated iommu. That might
always be true on today's platforms, but from the driver's point of view
we should not make such an assumption.

For example, if the iommu implementation decides not to turn off the
graphics iommu (perhaps due to some hw quirk or for graphics
virtualization), your code will be broken.

If I got it right, this would go back to your earlier recommendation to have the check look like this:

static bool intel_vtd_active(struct drm_i915_private *i915)
{
         struct iommu_domain *domain;

         domain = iommu_get_domain_for_dev(i915->drm.dev);
         if (domain && (domain->type & __IOMMU_DOMAIN_PAGING))
                 return true;
         /* Running as a guest, we assume the host is enforcing VT-d */
         return run_as_guest();
}

This would be okay as a first step?

Elsewhere in the thread Robin suggested looking at dev->dma_ops and comparing against iommu_dma_ops. Would these two solutions be effectively the same?

Effectively, yes. See iommu_setup_dma_ops() - the only way to end up with iommu_dma_ops is if a managed translation domain is present; if the IOMMU is present but the default domain type has been set to passthrough (either globally or forced for the given device) it will do nothing and leave you with dma-direct, while if the IOMMU has been ignored entirely then it should never even be called. Thus it neatly encapsulates what you're after here.

One concern I have is whether pass-through mode truly does nothing, or whether addresses still go through the DMAR hardware, just without translation?

Pass-through mode means the latter.


If the latter, then the most like-for-like change is actually exactly what the first version of my patch did: replace intel_iommu_gfx_mapped with a plain non-NULL check on iommu_get_domain_for_dev().

It depends on what you want here:

#1) the graphics device works in iommu pass-through mode
    - the device has an iommu
    - but the iommu does no translation
    - DMA transactions go through the iommu with the same destination
      memory address as specified by the device;

Do you have any indications of the slowdown this adds?

#2) the graphics device works without a system iommu
    - the iommu is off
    - there is no iommu on the path of the DMA transaction.

My suggestion works for #1). Robin's suggestion (device_iommu_mapped())
could work for #2).
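
For illustration, a minimal sketch of how the two cases above map onto the
IOMMU core helpers discussed in this thread; the two wrapper names are made
up for the example, only device_iommu_mapped(), iommu_get_domain_for_dev()
and __IOMMU_DOMAIN_PAGING are real kernel symbols:

#include <linux/device.h>
#include <linux/iommu.h>

/* Sketch: false only in case #2, i.e. no iommu on the DMA path at all. */
static bool example_dev_behind_iommu(struct device *dev)
{
        return device_iommu_mapped(dev);
}

/* Sketch: false in both #1 and #2, i.e. no translation actually performed. */
static bool example_dev_iommu_translated(struct device *dev)
{
        struct iommu_domain *domain = iommu_get_domain_for_dev(dev);

        return domain && (domain->type & __IOMMU_DOMAIN_PAGING);
}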

On the question of what I want here: it seems that to preserve like-for-like behaviour with the current and past i915 usage, i.e. intel_iommu_gfx_mapped, the first version of my patch should be used.

In other words, if I configure the boot with iommu=pt then intel_iommu_gfx_mapped is true. So if I add the __IOMMU_DOMAIN_PAGING check, the new intel_vtd_active() would return false where the old version would return true.

So v1 of the patch feels like the safest route, given I don't know which workarounds are due to the remapping slowdown and which may be needed even in pass-through mode.

I would explain the situation in a comment inside intel_vtd_active() for future reference.
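
For reference, a rough sketch of what such a commented, v1-style check might
look like; this is only an illustration based on the diff above, not the
actual patch text:

static inline bool intel_vtd_active(struct drm_i915_private *i915)
{
        /*
         * Any domain, pass-through included, means DMA still flows through
         * the DMAR hardware - the same condition the old global
         * intel_iommu_gfx_mapped reflected - so a plain non-NULL check is
         * the closest like-for-like replacement.
         */
        if (iommu_get_domain_for_dev(i915->drm.dev))
                return true;

        /* Running as a guest, we assume the host is enforcing VT-d */
        return run_as_guest();
}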

Regards,

Tvrtko


