2013/6/25 Jerome Glisse <j.glisse@xxxxxxxxx>:
> On Tue, Jun 25, 2013 at 10:17 AM, Inki Dae <daeinki@xxxxxxxxx> wrote:
>> 2013/6/25 Rob Clark <robdclark@xxxxxxxxx>:
>>> On Tue, Jun 25, 2013 at 5:09 AM, Inki Dae <daeinki@xxxxxxxxx> wrote:
>>>>> that should be the role of kernel memory management, which of course
>>>>> needs synchronization between A and B. But in no case should this be
>>>>> done using dma-buf. dma-buf is for sharing content between different
>>>>> devices, not for sharing resources.
>>>>
>>>> Hmm, is that true? Are you sure? Then what do you think about
>>>> reservation? As far as I know, reservation also relies on dma-buf for
>>>> the same reason: we use reservation in order to use dma-buf. As you may
>>>> know, a reservation object is allocated and initialized when a buffer
>>>> object is exported to a dma-buf.
>>>
>>> no, and this is why the reservation object can be passed in when you
>>> construct the dmabuf.
>>
>> Right, that way we could use dma-buf for buffer synchronization. I just
>> wanted to ask why Jerome said that "dma-buf is for sharing content btw
>> different devices not sharing resources".
>
> From memory, dma-buf was motivated by a few use cases, among them a
> webcam capturing frames into a buffer that the gpu then uses directly
> without a memcpy, or one big gpu rendering a scene into a buffer that is
> then used by a low-power gpu for display. In other words, it was done to
> allow different devices to operate on the same data through the same
> backing memory.
>
> AFAICT you seem to want to use dma-buf to create a scratch buffer, i.e.
> process A needs X amount of memory for an operation and can release/free
> this memory once it is done, and process B can then use this X memory for
> its own operation, discarding the content of process A. I presume that on
> the next frame the sequence repeats: process A does something, then
> process B does its thing. So to me it sounds like you want to implement a
> global scratch buffer using the dmabuf API, and that sounds bad to me.
>
> I know most closed drivers have several pools of memory (long-lived
> objects, short-lived objects and scratch space), and user space allocates
> from one of these pools while synchronization is done by the driver,
> using a driver-specific API to reclaim memory. Of course this works
> nicely only if you are talking about one logic block, or at the very
> least hardware that has one memory controller.
>
> Now, if you are thinking of a scratch buffer shared among several
> different devices, you need to be aware of the security implications, the
> most obvious being that you don't want process B to be able to read
> process A's scratch memory. I know the argument that it is only graphics
> data, but one day this might become gpu code, and it might be possible to
> insert a jump to malicious gpu code.

If that is what you think, then there is definitely a misunderstanding.
My approach is similar to dma fence: it guarantees that one DMA cannot
access a buffer while another DMA is accessing that buffer. As far as I
know, some gpu drivers in mainline already use their own mechanism for
this. And regarding the part you commented on, please note that I only
introduced a user-side mechanism for buffer synchronization between CPU
and CPU, and additionally between CPU and DMA; it is not implemented yet,
just planned.
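
To make the reservation point above concrete, here is a rough sketch of an
exporter handing its existing reservation object to the dma-buf at export
time. This is only an illustration: the dma_buf_export_info structure and
its resv field follow a later kernel interface, and my_gem_object /
my_dmabuf_ops are made-up names, not code from this patch set.

/*
 * Illustration only: dma_buf_export_info and its resv field follow a
 * later kernel interface; my_gem_object and my_dmabuf_ops are made up.
 * The point is that the exporter passes its existing reservation
 * object instead of letting dma-buf allocate a private one.
 */
#include <linux/dma-buf.h>
#include <linux/reservation.h>

struct dma_buf *my_gem_prime_export(struct my_gem_object *obj, int flags)
{
	DEFINE_DMA_BUF_EXPORT_INFO(exp_info);

	exp_info.ops   = &my_dmabuf_ops;   /* exporter's dma_buf_ops */
	exp_info.size  = obj->size;
	exp_info.flags = flags;
	exp_info.resv  = &obj->resv;       /* share the buffer's reservation object */
	exp_info.priv  = obj;

	/*
	 * Every importer now waits on the fences attached to the same
	 * reservation object, so device A's access is ordered against
	 * device B's access to the shared buffer.
	 */
	return dma_buf_export(&exp_info);
}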
Thanks,
Inki Dae

> Cheers,
> Jerome