On Tue, Feb 3, 2015 at 11:58 AM, Russell King - ARM Linux
<linux@xxxxxxxxxxxxxxxx> wrote:
>
> Okay, but switching contexts is not something which the DMA API has
> any knowledge of (so it can't know which context to associate with
> which mapping.)  While it knows which device, it has no knowledge
> (nor is there any way for it to gain knowledge) about contexts.
>
> My personal view is that extending the DMA API in this way feels quite
> dirty - it's a violation of the DMA API design, which is to (a) demark
> the buffer ownership between CPU and DMA agent, and (b) to translate
> buffer locations into a cookie which device drivers can use to instruct
> their device to access that memory.  To see why, consider... that you
> map a buffer to a device in context A, and then you switch to context B,
> which means the dma_addr_t given previously is no longer valid.  You
> then try to unmap it... which is normally done using the (now no longer
> valid) dma_addr_t.
>
> It seems to me that to support this at DMA API level, we would need to
> completely revamp the DMA API, which IMHO isn't going to be nice.  (It
> would mean that we end up with three APIs - the original PCI DMA API,
> the existing DMA API, and some new DMA API.)
>
> Do we have any views on how common this feature is?

I can't think of cases outside of GPUs.  If it were more common I'd be
in favor of teaching the DMA API about multiple contexts, but right now
I think that would just amount to forcing a lot of churn on everyone
else for the benefit of GPUs.  IMHO it makes more sense for GPU drivers
to bypass the DMA API if they need to.

Plus, sooner or later, someone will discover that with some trick or
optimization they can get moar fps, but the extra layer of abstraction
will just be getting in the way.

BR,
-R
_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/dri-devel