Hi,
On 2024/10/1 20:17, Lucas Stach wrote:
Hi Xiaolei,
On Tuesday, 2024-09-03 at 10:08 +0800, Xiaolei Wang wrote:
Remove __GFP_HIGHMEM when requesting a page from the DMA32 zone,
and since all Vivante GPUs in the system share the same DMA
constraints, move the check of whether to get a page from DMA32
to etnaviv_bind().
Fixes: b72af445cd38 ("drm/etnaviv: request pages from DMA32 zone when needed")
Suggested-by: Sui Jingfeng <sui.jingfeng@xxxxxxxxx>
Signed-off-by: Xiaolei Wang <xiaolei.wang@xxxxxxxxxxxxx>
---
Changelog:
v1:
https://patchwork.kernel.org/project/dri-devel/patch/20240806104733.2018783-1-xiaolei.wang@xxxxxxxxxxxxx/
v2:
Fix the issue from v1 where GFP_USER was not retained, and update the commit log.
v3:
Use "priv->shm_gfp_mask = GFP_USER | __GFP_RETRY_MAYFAIL | __GFP_NOWARN;"
instead of
"priv->shm_gfp_mask = GFP_HIGHUSER | __GFP_RETRY_MAYFAIL | __GFP_NOWARN;"
I don't understand this part of the changes in the new version. Why
should we always drop the HIGHMEM bit, and not only in the case where
DMA addressing is limited? This seems overly restrictive.
While reading the implementation of the dma_alloc_attrs() function, I
noticed that, apart from the allocation from the device coherent pool,
the rest of the implementation masks out __GFP_DMA, __GFP_DMA32 and
__GFP_HIGHMEM anyway. So?
```
void *dma_alloc_attrs(struct device *dev, size_t size, dma_addr_t *dma_handle,
                gfp_t flag, unsigned long attrs)
{
        const struct dma_map_ops *ops = get_dma_ops(dev);
        void *cpu_addr;

        WARN_ON_ONCE(!dev->coherent_dma_mask);

        /*
         * DMA allocations can never be turned back into a page pointer, so
         * requesting compound pages doesn't make sense (and can't even be
         * supported at all by various backends).
         */
        if (WARN_ON_ONCE(flag & __GFP_COMP))
                return NULL;

        if (dma_alloc_from_dev_coherent(dev, size, dma_handle, &cpu_addr))
                return cpu_addr;

        /* let the implementation decide on the zone to allocate from: */
        flag &= ~(__GFP_DMA | __GFP_DMA32 | __GFP_HIGHMEM);

        if (dma_alloc_direct(dev, ops))
                cpu_addr = dma_direct_alloc(dev, size, dma_handle, flag, attrs);
        else if (use_dma_iommu(dev))
                cpu_addr = iommu_dma_alloc(dev, size, dma_handle, flag, attrs);
        else if (ops->alloc)
                cpu_addr = ops->alloc(dev, size, dma_handle, flag, attrs);
        else
                return NULL;

        return cpu_addr;
}
```
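For reference, the only difference between the two masks touched in v3 is
that __GFP_HIGHMEM bit. A minimal sketch of the relevant definitions as I
recall them from include/linux/gfp_types.h in recent kernels (quoted from
memory, please double-check against your tree):

```
/* Sketch from include/linux/gfp_types.h (recent kernels), shown only to
 * illustrate that GFP_HIGHUSER is just GFP_USER plus the __GFP_HIGHMEM bit.
 */
#define GFP_USER        (__GFP_RECLAIM | __GFP_IO | __GFP_FS | __GFP_HARDWALL)
#define GFP_HIGHUSER    (GFP_USER | __GFP_HIGHMEM)
```

So switching shm_gfp_mask from GFP_HIGHUSER to GFP_USER clears nothing
except that __GFP_HIGHMEM bit, which dma_alloc_attrs() masks out anyway, as
quoted above.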
Regards,
Lucas
and move the check of whether to get a page from DMA32 to etnaviv_bind().
drivers/gpu/drm/etnaviv/etnaviv_drv.c | 10 +++++++++-
drivers/gpu/drm/etnaviv/etnaviv_gpu.c | 8 --------
2 files changed, 9 insertions(+), 9 deletions(-)
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.c b/drivers/gpu/drm/etnaviv/etnaviv_drv.c
index 6500f3999c5f..8cb2c3ec8e5d 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_drv.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.c
@@ -536,7 +536,15 @@ static int etnaviv_bind(struct device *dev)
 	mutex_init(&priv->gem_lock);
 	INIT_LIST_HEAD(&priv->gem_list);
 	priv->num_gpus = 0;
-	priv->shm_gfp_mask = GFP_HIGHUSER | __GFP_RETRY_MAYFAIL | __GFP_NOWARN;
+	priv->shm_gfp_mask = GFP_USER | __GFP_RETRY_MAYFAIL | __GFP_NOWARN;
+
+	/*
+	 * If the GPU is part of a system with DMA addressing limitations,
+	 * request pages for our SHM backend buffers from the DMA32 zone to
+	 * hopefully avoid performance killing SWIOTLB bounce buffering.
+	 */
+	if (dma_addressing_limited(dev))
+		priv->shm_gfp_mask |= GFP_DMA32;
 
 	priv->cmdbuf_suballoc = etnaviv_cmdbuf_suballoc_new(drm->dev);
 	if (IS_ERR(priv->cmdbuf_suballoc)) {
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
index 7c7f97793ddd..5e753dd42f72 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
@@ -839,14 +839,6 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu)
 	if (ret)
 		goto fail;
 
-	/*
-	 * If the GPU is part of a system with DMA addressing limitations,
-	 * request pages for our SHM backend buffers from the DMA32 zone to
-	 * hopefully avoid performance killing SWIOTLB bounce buffering.
-	 */
-	if (dma_addressing_limited(gpu->dev))
-		priv->shm_gfp_mask |= GFP_DMA32;
-
 	/* Create buffer: */
 	ret = etnaviv_cmdbuf_init(priv->cmdbuf_suballoc, &gpu->buffer,
 				  PAGE_SIZE);
--
Best regards,
Sui