On Wed, Dec 12, 2018 at 06:34:25PM +0900, Tomasz Figa wrote:
> The typical DMA-buf import/export flow is as follows:
> 1) Driver X allocates buffer A using this API for device x and gets a
>    DMA address inside x's DMA (or IOVA) address space.
> 2) Driver X creates a dma_buf D(A), backed by buffer A, and gives the
>    user space process a file descriptor FD(A) referring to it.
> 3) Driver Y gets FD(A) from user space and needs to map it into the
>    DMA/IOVA address space of device y. It does so by calling
>    dma_buf_map_attachment(), which returns an sg_table describing the
>    mapping.

And just as I said last time, I think we need to fix the dma-buf code to
not rely on struct scatterlist. struct scatterlist is an interface that
is fundamentally page based, while the DMA coherent allocator only gives
you a kernel virtual address and a DMA address (and the option to map
the buffer into userspace).

So we need to get the interface right, as we already have DMA-able
memory without a struct page and we are bound to get more of it. Never
mind all the caching implications even when we do have a struct page.

It would also be great to use that opportunity to get rid of all the
code duplication between the nearly identical dma-buf providers backed
by the DMA API.
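
For reference, a minimal sketch of the two sides being contrasted here,
under the assumption that "dev", "fd" and "size" stand in for driver Y's
struct device, the imported FD(A) and the buffer size; the function
names y_import_buffer() and x_alloc_buffer() are made up for
illustration and error unwinding is trimmed:

    #include <linux/dma-buf.h>
    #include <linux/dma-mapping.h>
    #include <linux/gfp.h>
    #include <linux/scatterlist.h>

    /*
     * Importer side (step 3 of the quoted flow): driver Y turns FD(A)
     * into a device-visible address, which today only comes back
     * wrapped in an sg_table.
     */
    static dma_addr_t y_import_buffer(struct device *dev, int fd)
    {
            struct dma_buf *buf = dma_buf_get(fd);
            struct dma_buf_attachment *attach = dma_buf_attach(buf, dev);
            struct sg_table *sgt =
                    dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);

            return sg_dma_address(sgt->sgl);
    }

    /*
     * Exporter side when the backing memory comes from the coherent
     * DMA allocator: all the driver gets back is a kernel virtual
     * address plus a dma_addr_t, with no struct page to build that
     * scatterlist from.
     */
    static void *x_alloc_buffer(struct device *dev, size_t size,
                                dma_addr_t *dma_handle)
    {
            return dma_alloc_coherent(dev, size, dma_handle, GFP_KERNEL);
    }

The mismatch is visible in the return types alone: the allocator hands
out (void *, dma_addr_t) pairs, while the dma-buf attachment interface
forces everything through an sg_table.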