On Wed, Dec 12, 2018 at 10:54 PM Christoph Hellwig <hch@xxxxxxxxxxxxx> wrote:
>
> On Wed, Dec 12, 2018 at 06:34:25PM +0900, Tomasz Figa wrote:
> > The typical DMA-buf import/export flow is as follows:
> > 1) Driver X allocates buffer A using this API for device x and gets a
> > DMA address inside x's DMA (or IOVA) address space.
> > 2) Driver X creates a dma_buf D(A), backed by buffer A, and gives the
> > user space process a file descriptor FD(A) referring to it.
> > 3) Driver Y gets FD(A) from the user space and needs to map it into
> > the DMA/IOVA address space of device y. It does it by calling
> > dma_buf_map_attachment(), which returns an sg_table describing the
> > mapping.
>
> And just as I said last time, I think we need to fix the dma-buf code
> to not rely on struct scatterlist. struct scatterlist is an interface
> that is fundamentally page based, while the DMA coherent allocator
> only gives you a kernel virtual address and a DMA address (and the
> option to map the buffer into userspace). So we need to get the
> interface right, as we already have DMAable memory without a struct
> page and we are bound to get more of it. Never mind all the caching
> implications even if we do have a struct page.

Putting aside the problem of memory without a struct page, one thing to
note here is that what is a contiguous DMA range for device X may not
be mappable contiguously for device Y, so something like a scatter list
would still be needed to fully describe the buffer. Do we already have
a structure that would work for this purpose? I'd assume we need
something like the existing scatterlist, but with the page links
replaced by something that doesn't require the memory to have a struct
page, possibly just a PFN?

> It would also be great to use that opportunity to get rid of all the
> code duplication between the almost identical dma-buf providers backed
> by the DMA API.

Could you shed some more light on this? I'm curious what code
duplication you're referring to.

Best regards,
Tomasz
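
P.S. To make step 3) above concrete, here is a rough sketch of the
importer side. The dma_buf_*() calls are the real dma-buf API;
y_import_buffer(), ydev and fd are made-up names for illustration, and
real code would keep the mapping around instead of tearing it down
right away:

#include <linux/dma-buf.h>
#include <linux/err.h>
#include <linux/scatterlist.h>

static int y_import_buffer(struct device *ydev, int fd)
{
        struct dma_buf *dmabuf;
        struct dma_buf_attachment *attach;
        struct sg_table *sgt;
        struct scatterlist *sg;
        int i;

        dmabuf = dma_buf_get(fd);               /* FD(A) -> D(A) */
        if (IS_ERR(dmabuf))
                return PTR_ERR(dmabuf);

        attach = dma_buf_attach(dmabuf, ydev);
        if (IS_ERR(attach)) {
                dma_buf_put(dmabuf);
                return PTR_ERR(attach);
        }

        /* Map A into y's DMA/IOVA space; result is an sg_table. */
        sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
        if (IS_ERR(sgt)) {
                dma_buf_detach(dmabuf, attach);
                dma_buf_put(dmabuf);
                return PTR_ERR(sgt);
        }

        /* Each segment carries a DMA address valid for device y. */
        for_each_sg(sgt->sgl, sg, sgt->nents, i)
                dev_info(ydev, "segment %d: %pad + %u\n", i,
                         &sg_dma_address(sg), sg_dma_len(sg));

        dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
        dma_buf_detach(dmabuf, attach);
        dma_buf_put(dmabuf);
        return 0;
}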
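
The struct page problem Christoph mentions is visible right at
allocation time. A coherent allocation hands back only a kernel virtual
address and a DMA handle, so there may be nothing page-shaped to put
into a scatterlist entry (minimal sketch, "dev" and "size" assumed to
exist, needs <linux/dma-mapping.h>):

        dma_addr_t dma_handle;
        void *vaddr;

        /* All we get back: a vaddr and a dma_addr_t, no struct page. */
        vaddr = dma_alloc_coherent(dev, size, &dma_handle, GFP_KERNEL);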
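
And as for the PFN idea, a pure strawman of the kind of structure I
have in mind; none of these names exist in the kernel today:

/*
 * Like struct scatterlist, but each entry is a PFN range instead of a
 * page link, so it can also describe DMAable memory that has no
 * struct page. Hypothetical types for discussion only.
 */
struct pfn_range {
        unsigned long   pfn;            /* first PFN of the range */
        unsigned int    nr_pages;       /* length in pages */
        dma_addr_t      dma_address;    /* filled in when mapped */
        unsigned int    dma_length;     /* may merge CPU ranges */
};

struct pfn_table {
        struct pfn_range *ranges;
        unsigned int    nr_ranges;      /* CPU-side ranges */
        unsigned int    nr_dma_ranges;  /* valid after mapping */
};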