The 'offset' member of struct scatterlist denotes the offset into an SG entry, in bytes. The sg_dma_len() macro can be used to get the length of an SG entry; those lengths are expected to be CPU-page-size aligned. At least for now, we call drm_prime_pages_to_sg() to convert our page array into an SG list, passing the number of CPU pages as the third argument to describe the size of the memory backing the GEM buffer object. drm_prime_pages_to_sg() calls sg_alloc_table_from_pages_segment() to do the work, and sg_alloc_table_from_pages_segment() always hardcodes the offset to zero. The size of *every* SG entry is therefore a multiple of the CPU page size, i.e. a multiple of PAGE_SIZE.

If the GPU wants to map/unmap a bigger page partially, we should use 'sg_dma_address(sg) + sg->offset' to calculate the destination DMA address, and the size to map/unmap is 'sg_dma_len(sg) - sg->offset'. The current implementation gets this wrong, but since 'sg->offset' is always equal to 0, drm/etnaviv works in practice by luck. Fix this so that it is at least conceptually correct.

While at it, also fix the abused types:

- sg_dma_address() returns a DMA address, whose type is dma_addr_t, not phys_addr_t. For VRAM, there may be another translation between the bus address and the final physical address of the VRAM or carved-out RAM.

- sg_dma_len() returns unsigned int, not size_t. Avoid hinting the compiler to do unnecessary integer promotion.
Signed-off-by: Sui Jingfeng <sui.jingfeng@xxxxxxxxx>
---
 drivers/gpu/drm/etnaviv/etnaviv_mmu.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
index 1661d589bf3e..4ee9ed96b1d8 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_mmu.c
@@ -80,10 +80,10 @@ static int etnaviv_iommu_map(struct etnaviv_iommu_context *context, u32 iova,
 		return -EINVAL;
 
 	for_each_sgtable_dma_sg(sgt, sg, i) {
-		phys_addr_t pa = sg_dma_address(sg) - sg->offset;
-		size_t bytes = sg_dma_len(sg) + sg->offset;
+		dma_addr_t pa = sg_dma_address(sg) + sg->offset;
+		unsigned int bytes = sg_dma_len(sg) - sg->offset;
 
-		VERB("map[%d]: %08x %pap(%zx)", i, iova, &pa, bytes);
+		VERB("map[%d]: %08x %pap(%x)", i, iova, &pa, bytes);
 
 		ret = etnaviv_context_map(context, da, pa, bytes, prot);
 		if (ret)
@@ -109,11 +109,11 @@ static void etnaviv_iommu_unmap(struct etnaviv_iommu_context *context, u32 iova,
 	int i;
 
 	for_each_sgtable_dma_sg(sgt, sg, i) {
-		size_t bytes = sg_dma_len(sg) + sg->offset;
+		unsigned int bytes = sg_dma_len(sg) - sg->offset;
 
 		etnaviv_context_unmap(context, da, bytes);
 
-		VERB("unmap[%d]: %08x(%zx)", i, iova, bytes);
+		VERB("unmap[%d]: %08x(%x)", i, iova, bytes);
 
 		BUG_ON(!PAGE_ALIGNED(bytes));
-- 
2.34.1