Quoting Christian König (2018-03-16 13:20:45)
> @@ -326,6 +338,29 @@ struct dma_buf_attachment {
>  	struct device *dev;
>  	struct list_head node;
>  	void *priv;
> +
> +	/**
> +	 * @invalidate_mappings:
> +	 *
> +	 * Optional callback provided by the importer of the attachment which
> +	 * must be set before mappings are created.
> +	 *
> +	 * If provided the exporter can avoid pinning the backing store while
> +	 * mappings exists.

Hmm, no I don't think it avoids the pinning issue entirely. As it
stands, the importer doesn't have a page refcount and so they all rely
on the exporter keeping the dmabuf pages pinned while attached. What can
happen is that, given the invalidate cb, the importers can revoke their
attachments, letting the exporter recover the pages/sg, and then start
again from scratch. That also neatly answers what happens if not all
importers provide an invalidate cb, or one fails: the dmabuf remains
pinned and the exporter must retreat.

> +	 *
> +	 * The function is called with the lock of the reservation object
> +	 * associated with the dma_buf held and the mapping function must be
> +	 * called with this lock held as well. This makes sure that no mapping
> +	 * is created concurrently with an ongoing invalidation.
> +	 *
> +	 * After the callback all existing mappings are still valid until all
> +	 * fences in the dma_bufs reservation object are signaled, but should be
> +	 * destroyed by the importer as soon as possible.
> +	 *
> +	 * New mappings can be created immediately, but can't be used before the
> +	 * exclusive fence in the dma_bufs reservation object is signaled.
> +	 */
> +	void (*invalidate_mappings)(struct dma_buf_attachment *attach);

The intent is that the invalidate is synchronous and immediate, while
locked? We are looking at recursing back into the dma_buf functions to
remove each attachment from the invalidate cb (as well as waiting for
dma), won't that cause some nasty issues?
-Chris
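The revoke-and-restart model described above can be sketched as a small
userspace C mock. This is illustrative only: the struct and function
names (`model_attachment`, `exporter_try_unpin`, etc.) are invented, not
the real dma-buf API; it models just one rule, namely that the exporter
can only recover its backing store if every importer supplied an
invalidate callback, and must otherwise retreat and stay pinned.

```c
/* Hypothetical userspace model of the revoke-and-restart flow; the
 * names here are illustrative, not the real dma-buf API. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct model_attachment {
	bool mapped;
	/* Optional; importers without this keep the exporter pinned. */
	void (*invalidate)(struct model_attachment *a);
};

struct model_buf {
	struct model_attachment *attachments;
	size_t nr_attachments;
	bool pinned;
};

static void importer_invalidate(struct model_attachment *a)
{
	/* The importer revokes its mapping; real code would also wait
	 * for the fences in the reservation object before the exporter
	 * may reuse the pages. */
	a->mapped = false;
}

/* Try to recover the backing store: this succeeds only if every
 * importer can be revoked via its invalidate callback; otherwise the
 * dmabuf remains pinned and the exporter must retreat. */
static bool exporter_try_unpin(struct model_buf *buf)
{
	size_t i;

	for (i = 0; i < buf->nr_attachments; i++)
		if (!buf->attachments[i].invalidate)
			return false; /* legacy importer: stay pinned */

	for (i = 0; i < buf->nr_attachments; i++)
		buf->attachments[i].invalidate(&buf->attachments[i]);

	buf->pinned = false;
	return true;
}
```

With one legacy importer attached (no invalidate callback),
`exporter_try_unpin()` fails and the buffer stays pinned; once all
importers provide the callback, the exporter can revoke them and
recover the pages, and the importers start again from scratch.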