On 30.04.19 at 16:34, Daniel Vetter wrote:
> On Tue, Apr 30, 2019 at 04:26:32PM +0200, Christian König wrote:
>> On 30.04.19 at 15:59, Daniel Vetter wrote:
>>> On Tue, Apr 30, 2019 at 03:42:22PM +0200, Christian König wrote:
>>>> On 29.04.19 at 10:40, Daniel Vetter wrote:
>>>>> On Fri, Apr 26, 2019 at 02:36:28PM +0200, Christian König wrote:
>>>>>> [SNIP]
>>>>>> +/**
>>>>>> + * dma_buf_pin - Lock down the DMA-buf
>>>>>> + *
>>>>>> + * @dmabuf: [in] DMA-buf to lock down.
>>>>>> + *
>>>>>> + * Returns:
>>>>>> + * 0 on success, negative error code on failure.
>>>>>> + */
>>>>>> +int dma_buf_pin(struct dma_buf *dmabuf)
>>>>> I think this should be on the attachment, not on the buffer. Or is the
>>>>> idea that a pin is for the entire buffer, and all subsequent
>>>>> dma_buf_map_attachment calls must work for all attachments? I think
>>>>> this matters for sufficiently contrived p2p scenarios.
>>>> This is indeed for the DMA-buf, because we are pinning the underlying
>>>> backing store and not just one attachment.
>>> You can't move the buffer either way, so from that point there's no
>>> difference. I meant more from an accounting/API point of view: whether
>>> it's OK to pin a buffer you haven't even attached to yet. From the code
>>> it seems like you always want to attach first, hence it would make sense
>>> to put the pin/unpin onto the attachment. That might also help with
>>> debugging: we could check whether everyone balances their pin/unpin
>>> correctly (instead of just counting for the overall dma-buf).
>>>
>>> There's also a slight semantic difference between a pin on an attachment
>>> (which would mean this attachment is always going to be mappable and
>>> won't move around) and a pin on the entire dma-buf, which means the
>>> entire dma-buf and all its attachments must always be mappable.
>>>
>>> Otoh, a global pin is probably easier: you just need to check all
>>> current attachments again, whether they still work or whether you might
>>> need to move the buffer around to a more suitable place (e.g. if not all
>>> of them can do p2p, it needs to go into system RAM before it's pinned).
>>>
>>> For the backing storage you only want one per-BO ->pinned_count, that's
>>> clear; my suggestion was more about whether having more information
>>> about who's pinning is useful. Exporters can always just ignore that
>>> added information.
>>>
>>>> The key point is that I want to call this in the exporter itself in the
>>>> long run, e.g. so that the exporter stops working with its internal
>>>> special handling functions and uses this one instead.
>>> Hm, not exactly following why the exporter needs to call
>>> dma_buf_pin/unpin, instead of whatever is used to implement that. Or do
>>> you mean that you want a ->pinned_count in dma_buf itself, so that
>>> there's only one book-keeping for this?
>> Yes, exactly that is one of the final goals of this.
>>
>> We could of course use the attachment here, but that would disqualify the
>> exporter from calling this directly without an attachment.
> Hm, the exporter calling dma_buf_pin/unpin sounds like one seriously
> inverted lasagna :-)
>
> I do understand the goal: all these importer/exporter special cases in
> the code are a bit silly, but I think the proper fix would be to just
> always import a buffer, even the private ones, allocated against some
> exporter-only thing. Then it's always the same function to call.
>
> But I'm not sure this is a good reason to guide the dma-buf interfaces
> for everyone else. Moving pin to the attachment sounds like a better idea
> (if the above is the only reason to keep it on the dma-buf).

Yeah, that's on my mind as well. But I'm running into a chicken-and-egg
problem here again.

Basically we need to convert the drivers to do their internal operations
using the DMA-buf as the top-level object first, and only then can we
switch all internal operations to using a "normal" attachment.

In addition, it simply doesn't look like the right idea to use the
attachment as the parameter here. After all, we pin the underlying DMA-buf
and NOT the attachment.

Christian.

> -Daniel
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch

_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/dri-devel