On 2021-12-07 9:59 a.m., Philip Yang wrote:
If the host IOMMU for amdgpu is not enabled, or the IOMMU is in pass-through
mode, the address returned by dma_map_page is equal to the page physical
address. Use this to set the adev->iommu_isolation flag, which will be used
to optimize memory usage for multi-GPU mappings.
Signed-off-by: Philip Yang <Philip.Yang@xxxxxxx>
---
drivers/gpu/drm/amd/amdgpu/amdgpu.h | 2 ++
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 27 ++++++++++++++++++++++
2 files changed, 29 insertions(+)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index c5cfe2926ca1..fbbe8c7b5d0c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -1097,6 +1097,8 @@ struct amdgpu_device {
struct amdgpu_reset_control *reset_cntl;
uint32_t ip_versions[MAX_HWIP][HWIP_MAX_INSTANCE];
+
+ bool iommu_isolation;
};
static inline struct amdgpu_device *drm_to_adev(struct drm_device *ddev)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 3c5afa45173c..6d0f3c477670 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -3364,6 +3364,31 @@ static int amdgpu_device_get_job_timeout_settings(struct amdgpu_device *adev)
return ret;
}
+/**
+ * amdgpu_device_check_iommu_isolation - check if IOMMU isolation is enabled
+ *
+ * @adev: amdgpu_device pointer
+ *
+ * The device is in IOMMU isolation mode if the address returned by
+ * dma_map_page is not equal to the page physical address.
+ */
+static void amdgpu_device_check_iommu_isolation(struct amdgpu_device *adev)
+{
+ struct page *page;
+ dma_addr_t addr;
+
+ page = alloc_page(GFP_KERNEL);
+ if (!page)
+ return;
+ addr = dma_map_page(adev->dev, page, 0, PAGE_SIZE, DMA_BIDIRECTIONAL);
+ if (dma_mapping_error(adev->dev, addr))
+ goto out_free_page;
+ adev->iommu_isolation = (addr != page_to_phys(page));
+ dma_unmap_page(adev->dev, addr, PAGE_SIZE, DMA_BIDIRECTIONAL);
This is a bit of a hack. Unfortunately it seems there isn't a much
better way to do this. I guess you could copy the implementation of
dma_map_direct in kernel/dma/mapping.c, but that's more brittle.
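Something along these lines, just as a rough sketch (the helper name is made
up, and it ignores the dma_ops_bypass case that dma_map_direct() also checks,
so it's an illustration of the idea rather than a drop-in):

#include <linux/dma-map-ops.h>

/* True if the DMA core would take the direct-mapping path for this device. */
static bool amdgpu_dma_is_direct(struct device *dev)
{
	/* No dma_map_ops installed means dma-direct is used. */
	return get_dma_ops(dev) == NULL;
}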
I think this hack only tells you whether system memory is direct-mapped.
The answer may be different for peer VRAM (which isn't supported
upstream yet, but it's coming). I think this can happen when the IOMMU
is in pass-through mode by default but still used to DMA-map physical
addresses that are outside the DMA mask of the GPU. So a more future-proof
way would be to store a direct-mapped flag for each GPU-GPU and
GPU-System pair somehow. For the GPU->GPU direct mapping flag you'd need
to try to DMA-map a page from one GPU's VRAM to the other device.
Anyway, that can be done in a later change.
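For illustration only, the per-pair state could look something like this
(struct and field names are made up, not existing amdgpu code):

struct amdgpu_direct_map_info {
	/* GPU <-> system RAM is direct mapped */
	bool ram_direct_mapped;
	/* GPU <-> peer VRAM is direct mapped, indexed by peer GPU instance */
	bool vram_direct_mapped[AMDGPU_MAX_GPU_INSTANCE];
};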
For now I'd just change the name of the flag from iommu_isolation to
direct_map_ram or ram_is_direct_mapped or similar to be more specific
about what it means.
Regards,
Felix
+out_free_page:
+ __free_page(page);
+}
+
static const struct attribute *amdgpu_dev_attributes[] = {
&dev_attr_product_name.attr,
&dev_attr_product_number.attr,
@@ -3767,6 +3792,8 @@ int amdgpu_device_init(struct amdgpu_device *adev,
queue_delayed_work(system_wq, &mgpu_info.delayed_reset_work,
msecs_to_jiffies(AMDGPU_RESUME_MS));
+ amdgpu_device_check_iommu_isolation(adev);
+
return 0;
release_ras_con: