Hi Michael,

On 21.11.2022 at 00:44, Michael Grzeschik wrote:
> The comments before the vm_map_ram function state that it should be
> used for up to 256 KB only, and video buffers are definitely much
> larger. It recommends using vmap in that case.
The comment is:

/**
 * vm_map_ram - map pages linearly into kernel virtual address (vmalloc space)
 * @pages: an array of pointers to the pages to be mapped
 * @count: number of pages
 * @node: prefer to allocate data structures on this node
 *
 * If you use this function for less than VMAP_MAX_ALLOC pages, it could be
 * faster than vmap so it's good. But if you mix long-life and short-life
 * objects with vm_map_ram(), it could consume lots of address space through
 * fragmentation (especially on a 32bit machine). You could see failures in
 * the end. Please use this function for short-lived objects.
 *
 * Returns: a pointer to the address that has been mapped, or %NULL on failure
 */

As far as I understand, the comment means:
- for allocations smaller than VMAP_MAX_ALLOC pages, vm_map_ram() can be
  faster than vmap(),
- for larger allocations we don't know; maybe vmap() is faster, but the
  comment doesn't say that vm_map_ram() cannot be used,
- mixing long-life and short-life objects can have the side effect of
  creating fragmentation (which ultimately can lead to failures),
- the comment asks that the function be used for short-lived objects only
  (which may not be the same thing as "large objects").

Can you expand your commit message?

Regards,
Andrzej
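For reference, the lifetime distinction the comment draws can be sketched as follows. This is kernel-internal code (not a buildable userspace program); `pages` and `nr_pages` are assumed to describe an already-allocated page array, as in the vb2 case:

```c
#include <linux/vmalloc.h>	/* vmap(), vunmap(), vm_map_ram(), vm_unmap_ram() */

/* Short-lived mapping: vm_map_ram() may be faster for small page counts.
 * The caller must remember the page count for vm_unmap_ram(). */
void *va = vm_map_ram(pages, nr_pages, NUMA_NO_NODE);
if (va) {
	/* ... touch the buffer briefly ... */
	vm_unmap_ram(va, nr_pages);
}

/* Long-lived mapping (e.g. a video buffer kept for the buffer's lifetime):
 * vmap() avoids fragmenting the vmap space with long-lived entries. */
void *va2 = vmap(pages, nr_pages, VM_MAP, PAGE_KERNEL);
if (va2) {
	/* ... use the buffer for as long as needed ... */
	vunmap(va2);	/* page count is tracked internally */
}
```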
Signed-off-by: Michael Grzeschik <m.grzeschik@xxxxxxxxxxxxxx>
---
 drivers/media/common/videobuf2/videobuf2-dma-sg.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/media/common/videobuf2/videobuf2-dma-sg.c b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
index dcb8de5ab3e84a..e86621fba350f3 100644
--- a/drivers/media/common/videobuf2/videobuf2-dma-sg.c
+++ b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
@@ -188,7 +188,7 @@ static void vb2_dma_sg_put(void *buf_priv)
 		dma_unmap_sgtable(buf->dev, sgt, buf->dma_dir,
 				  DMA_ATTR_SKIP_CPU_SYNC);
 		if (buf->vaddr)
-			vm_unmap_ram(buf->vaddr, buf->num_pages);
+			vunmap(buf->vaddr);
 		sg_free_table(buf->dma_sgt);
 		while (--i >= 0)
 			__free_page(buf->pages[i]);
@@ -289,7 +289,7 @@ static void vb2_dma_sg_put_userptr(void *buf_priv)
 		__func__, buf->num_pages);
 	dma_unmap_sgtable(buf->dev, sgt, buf->dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
 	if (buf->vaddr)
-		vm_unmap_ram(buf->vaddr, buf->num_pages);
+		vunmap(buf->vaddr);
 	sg_free_table(buf->dma_sgt);
 	if (buf->dma_dir == DMA_FROM_DEVICE ||
 	    buf->dma_dir == DMA_BIDIRECTIONAL)
@@ -312,7 +312,8 @@ static void *vb2_dma_sg_vaddr(struct vb2_buffer *vb, void *buf_priv)
 			ret = dma_buf_vmap(buf->db_attach->dmabuf, &map);
 			buf->vaddr = ret ? NULL : map.vaddr;
 		} else {
-			buf->vaddr = vm_map_ram(buf->pages, buf->num_pages, -1);
+			buf->vaddr = vmap(buf->pages, buf->num_pages, VM_MAP,
+					  PAGE_KERNEL);
 		}
 	}