On Thu, Dec 06, 2012 at 10:07:48AM -0800, Aaron Plattner wrote:
> Instead of reimplementing all of the dma_buf functionality in every driver,
> create helpers drm_prime_import and drm_prime_export that implement them in
> terms of new, lower-level hook functions:
>
> gem_prime_pin: callback when a buffer is created, used to pin buffers into GTT
> gem_prime_get_pages: convert a drm_gem_object to an sg_table for export
> gem_prime_import_sg: convert an sg_table into a drm_gem_object
> gem_prime_vmap, gem_prime_vunmap: map and unmap an object
>
> These hooks are optional; drivers can opt in by using drm_gem_prime_import and
> drm_gem_prime_export as the .gem_prime_import and .gem_prime_export fields of
> struct drm_driver.
>
> Signed-off-by: Aaron Plattner <aplattner@xxxxxxxxxx>

A few comments:

- Can you please add kerneldoc entries for these new helpers? Laurent
  Pinchart started a nice drm kerneldoc overview, and since then we've
  integrated the kerneldoc api reference at a nice place there and
  brushed it up a bit whenever a bigger patch series touched the drm
  core or helpers. See e.g. my recent dp helper series for what I'm
  looking for. I think we should have kerneldoc at least for the
  exported functions - see the first sketch below for the style I mean.

- Just an idea for all the essentially noop cpu access helpers: maybe
  call them dma_buf_*_noop and shovel them as convenience helpers into
  the dma-buf.c code in the core, for drivers that don't want to/can't/
  won't bother to implement cpu access support. Related: export these
  functions in general so that drivers can pick&choose. The second
  sketch below shows what I mean.

- s/gem_prime_get_pages/gem_prime_get_sg/: drm/i915 now uses sg_tables
  everywhere to manage backing storage, and we're starting to use
  stolen memory, too. Stolen memory is not struct page backed, so
  better make this clear in the function names. I know that the current
  prime/dma_buf code from Dave doesn't really work too well (*cough*)
  if the backing storage isn't struct page backed, at least on the ttm
  import side.

  This also links in with my 2nd comment above - dma_buf requires that
  the exporter maps the sg into the importer's device space to support
  fake subdevices which share the exporter's iommu/gart, hence such an
  incestuous exporter might want to overwrite some of these functions.

- To make it a first-class helper and remove any and all midlayer
  stints from this (I truly loathe midlayers ...), maybe shovel this
  into a drm_prime_helper.c file. Also, adding functions to the core
  vtable for pure helpers is usually not too neat, since it makes it
  way too damn easy to mix things up and transform the helper into a
  midlayer by accident. E.g. the crtc helper functions all just use a
  generic void * pointer for their vtables; other helpers either
  subclass an existing struct or use their own completely independent
  structs, which the driver can embed into its own object struct and
  then do the right upcasting in the callbacks. tbh I haven't looked
  too closely at the series to assess what would make sense, but we
  have way too much gunk in the drm vtable already ... The third
  sketch below illustrates the idea.
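For the kerneldoc, roughly this style is what I'm after (just a sketch,
the actual descriptions would of course need to be fleshed out):

/**
 * drm_gem_prime_export - helper library implementation of the export callback
 * @dev: drm device the exported object belongs to
 * @obj: GEM object to export
 * @flags: flags like DRM_CLOEXEC
 *
 * Default dma_buf export implementation for GEM drivers using the
 * lower-level gem_prime_* hooks: pins the backing storage through
 * gem_prime_pin (if the driver provides it) and wraps the object into a
 * dma_buf.
 */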
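For the noop cpu access helpers, something like the below in dma-buf.c
is what I mean - all the names here are invented, and the bodies just
mirror the stubs this patch already open-codes:

/* hypothetical convenience stubs in dma-buf.c, exported so that
 * exporters which don't support cpu access can pick&choose them */
void *dma_buf_kmap_noop(struct dma_buf *dma_buf, unsigned long page_num)
{
	return NULL;	/* cpu access not supported */
}
EXPORT_SYMBOL_GPL(dma_buf_kmap_noop);

void dma_buf_kunmap_noop(struct dma_buf *dma_buf, unsigned long page_num,
			 void *addr)
{
	/* nothing to undo */
}
EXPORT_SYMBOL_GPL(dma_buf_kunmap_noop);

int dma_buf_mmap_noop(struct dma_buf *dma_buf, struct vm_area_struct *vma)
{
	return -EINVAL;	/* mmap not supported by this exporter */
}
EXPORT_SYMBOL_GPL(dma_buf_mmap_noop);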
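And to illustrate the vtable point, roughly what I have in mind (again
purely a sketch, all names invented, with the get_sg rename from above
folded in):

/* hypothetical: an independent vtable just for the prime helpers, which
 * the driver embeds and hands to drm_prime_helper.c, instead of growing
 * struct drm_driver even further */
struct drm_prime_helper_funcs {
	int (*gem_prime_pin)(struct drm_gem_object *obj);
	struct sg_table *(*gem_prime_get_sg)(struct drm_gem_object *obj);
	struct drm_gem_object *(*gem_prime_import_sg)(struct drm_device *dev,
						      size_t size,
						      struct sg_table *sg);
	void *(*gem_prime_vmap)(struct drm_gem_object *obj);
	void (*gem_prime_vunmap)(struct drm_gem_object *obj);
};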
Dunno whether this aligns with what you want to achieve here. But on a
quick hunch this code feels rather inflexible and just refactors the
identical copypasta code back into one copy. Anyway, just figured I'd
drop my 2 cents here, feel free to ignore (maybe save for the last
one).

Cheers, Daniel

> ---
>  drivers/gpu/drm/drm_prime.c | 190 +++++++++++++++++++++++++++++++++++++++++++-
>  include/drm/drmP.h          |  15 ++++
>  2 files changed, 204 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
> index 7f12573..566c2ac 100644
> --- a/drivers/gpu/drm/drm_prime.c
> +++ b/drivers/gpu/drm/drm_prime.c
> @@ -53,7 +53,8 @@
>   * Self-importing: if userspace is using PRIME as a replacement for flink
>   * then it will get a fd->handle request for a GEM object that it created.
>   * Drivers should detect this situation and return back the gem object
> - * from the dma-buf private.
> + * from the dma-buf private. Prime will do this automatically for drivers that
> + * use the drm_gem_prime_{import,export} helpers.
>   */
>
>  struct drm_prime_member {
> @@ -62,6 +63,146 @@ struct drm_prime_member {
>  	uint32_t handle;
>  };
>
> +static struct sg_table *drm_gem_map_dma_buf(struct dma_buf_attachment *attach,
> +		enum dma_data_direction dir)
> +{
> +	struct drm_gem_object *obj = attach->dmabuf->priv;
> +	struct sg_table *st;
> +
> +	mutex_lock_interruptible(&obj->dev->struct_mutex);
> +
> +	st = obj->dev->driver->gem_prime_get_pages(obj);
> +
> +	if (!IS_ERR_OR_NULL(st))
> +		dma_map_sg(attach->dev, st->sgl, st->nents, dir);
> +
> +	mutex_unlock(&obj->dev->struct_mutex);
> +	return st;
> +}
> +
> +static void drm_gem_unmap_dma_buf(struct dma_buf_attachment *attach,
> +		struct sg_table *sg, enum dma_data_direction dir)
> +{
> +	dma_unmap_sg(attach->dev, sg->sgl, sg->nents, dir);
> +	sg_free_table(sg);
> +	kfree(sg);
> +}
> +
> +static void drm_gem_dmabuf_release(struct dma_buf *dma_buf)
> +{
> +	struct drm_gem_object *obj = dma_buf->priv;
> +
> +	if (obj->export_dma_buf == dma_buf) {
> +		/* drop the reference on the export fd holds */
> +		obj->export_dma_buf = NULL;
> +		drm_gem_object_unreference_unlocked(obj);
> +	}
> +}
> +
> +static void *drm_gem_dmabuf_vmap(struct dma_buf *dma_buf)
> +{
> +	struct drm_gem_object *obj = dma_buf->priv;
> +	struct drm_device *dev = obj->dev;
> +	void *virtual = ERR_PTR(-EINVAL);
> +
> +	if (!dev->driver->gem_prime_vmap)
> +		return ERR_PTR(-EINVAL);
> +
> +	mutex_lock(&dev->struct_mutex);
> +	if (obj->vmapping_count) {
> +		obj->vmapping_count++;
> +		virtual = obj->vmapping_ptr;
> +		goto out_unlock;
> +	}
> +
> +	virtual = dev->driver->gem_prime_vmap(obj);
> +	if (IS_ERR(virtual)) {
> +		mutex_unlock(&dev->struct_mutex);
> +		return virtual;
> +	}
> +	obj->vmapping_count = 1;
> +	obj->vmapping_ptr = virtual;
> +out_unlock:
> +	mutex_unlock(&dev->struct_mutex);
> +	return obj->vmapping_ptr;
> +}
> +
> +static void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, void *vaddr)
> +{
> +	struct drm_gem_object *obj = dma_buf->priv;
> +	struct drm_device *dev = obj->dev;
> +
> +	mutex_lock(&dev->struct_mutex);
> +	obj->vmapping_count--;
> +	if (obj->vmapping_count == 0) {
> +		dev->driver->gem_prime_vunmap(obj);
> +		obj->vmapping_ptr = NULL;
> +	}
> +	mutex_unlock(&dev->struct_mutex);
> +}
> +
> +static void *drm_gem_dmabuf_kmap_atomic(struct dma_buf *dma_buf,
> +		unsigned long page_num)
> +{
> +	return NULL;
> +}
> +
> +static void drm_gem_dmabuf_kunmap_atomic(struct dma_buf *dma_buf,
> +		unsigned long page_num, void *addr)
> +{
> +
> +}
> +static void *drm_gem_dmabuf_kmap(struct dma_buf *dma_buf,
> +		unsigned long page_num)
> +{
> +	return NULL;
> +}
> +
> +static void drm_gem_dmabuf_kunmap(struct dma_buf *dma_buf,
> +		unsigned long page_num, void *addr)
> +{
> +
> +}
> +
> +static int drm_gem_dmabuf_mmap(struct dma_buf *dma_buf,
> +		struct vm_area_struct *vma)
> +{
> +	return -EINVAL;
> +}
> +
> +static int drm_gem_begin_cpu_access(struct dma_buf *dma_buf,
> +		size_t start, size_t length, enum dma_data_direction direction)
> +{
> +	return -EINVAL;
> +}
> +
> +static const struct dma_buf_ops drm_gem_prime_dmabuf_ops = {
> +	.map_dma_buf = drm_gem_map_dma_buf,
> +	.unmap_dma_buf = drm_gem_unmap_dma_buf,
> +	.release = drm_gem_dmabuf_release,
> +	.kmap = drm_gem_dmabuf_kmap,
> +	.kmap_atomic = drm_gem_dmabuf_kmap_atomic,
> +	.kunmap = drm_gem_dmabuf_kunmap,
> +	.kunmap_atomic = drm_gem_dmabuf_kunmap_atomic,
> +	.mmap = drm_gem_dmabuf_mmap,
> +	.vmap = drm_gem_dmabuf_vmap,
> +	.vunmap = drm_gem_dmabuf_vunmap,
> +	.begin_cpu_access = drm_gem_begin_cpu_access,
> +};
> +
> +struct dma_buf *drm_gem_prime_export(struct drm_device *dev,
> +		struct drm_gem_object *obj, int flags)
> +{
> +	if (dev->driver->gem_prime_pin) {
> +		int ret = dev->driver->gem_prime_pin(obj);
> +		if (ret)
> +			return ERR_PTR(ret);
> +	}
> +	return dma_buf_export(obj, &drm_gem_prime_dmabuf_ops, obj->size,
> +			      0600);
> +}
> +EXPORT_SYMBOL(drm_gem_prime_export);
> +
>  int drm_gem_prime_handle_to_fd(struct drm_device *dev,
>  		struct drm_file *file_priv, uint32_t handle, uint32_t flags,
>  		int *prime_fd)
> @@ -117,6 +258,53 @@ int drm_gem_prime_handle_to_fd(struct drm_device *dev,
>  }
>  EXPORT_SYMBOL(drm_gem_prime_handle_to_fd);
>
> +struct drm_gem_object *drm_gem_prime_import(struct drm_device *dev,
> +		struct dma_buf *dma_buf)
> +{
> +	struct dma_buf_attachment *attach;
> +	struct sg_table *sg;
> +	struct drm_gem_object *obj;
> +	int ret;
> +
> +	if (!dev->driver->gem_prime_import_sg)
> +		return ERR_PTR(-EINVAL);
> +
> +	if (dma_buf->ops == &drm_gem_prime_dmabuf_ops) {
> +		obj = dma_buf->priv;
> +		if (obj->dev == dev) {
> +			drm_gem_object_reference(obj);
> +			return obj;
> +		}
> +	}
> +
> +	attach = dma_buf_attach(dma_buf, dev->dev);
> +	if (IS_ERR(attach))
> +		return ERR_PTR(PTR_ERR(attach));
> +
> +	sg = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
> +	if (IS_ERR_OR_NULL(sg)) {
> +		ret = PTR_ERR(sg);
> +		goto fail_detach;
> +	}
> +
> +	obj = dev->driver->gem_prime_import_sg(dev, dma_buf->size, sg);
> +	if (IS_ERR(obj)) {
> +		ret = PTR_ERR(obj);
> +		goto fail_unmap;
> +	}
> +
> +	obj->import_attach = attach;
> +
> +	return obj;
> +
> +fail_unmap:
> +	dma_buf_unmap_attachment(attach, sg, DMA_BIDIRECTIONAL);
> +fail_detach:
> +	dma_buf_detach(dma_buf, attach);
> +	return ERR_PTR(ret);
> +}
> +EXPORT_SYMBOL(drm_gem_prime_import);
> +
>  int drm_gem_prime_fd_to_handle(struct drm_device *dev,
>  		struct drm_file *file_priv, int prime_fd, uint32_t *handle)
>  {
> diff --git a/include/drm/drmP.h b/include/drm/drmP.h
> index fad21c9..f65cae9 100644
> --- a/include/drm/drmP.h
> +++ b/include/drm/drmP.h
> @@ -674,6 +674,10 @@ struct drm_gem_object {
>
>  	/* dma buf attachment backing this object */
>  	struct dma_buf_attachment *import_attach;
> +
> +	/* vmap information */
> +	int vmapping_count;
> +	void *vmapping_ptr;
>  };
>
>  #include <drm/drm_crtc.h>
> @@ -919,6 +923,13 @@ struct drm_driver {
>  	/* import dmabuf -> GEM */
>  	struct drm_gem_object * (*gem_prime_import)(struct drm_device *dev,
>  				struct dma_buf *dma_buf);
> +	/* low-level interface used by drm_gem_prime_{import,export} */
> +	int (*gem_prime_pin)(struct drm_gem_object *obj);
> +	struct sg_table *(*gem_prime_get_pages)(struct drm_gem_object *obj);
> +	struct drm_gem_object *(*gem_prime_import_sg)(struct drm_device *dev,
> +				size_t size, struct sg_table *sg);
> +	void *(*gem_prime_vmap)(struct drm_gem_object *obj);
> +	void (*gem_prime_vunmap)(struct drm_gem_object *obj);
>
>  	/* vga arb irq handler */
>  	void (*vgaarb_irq)(struct drm_device *dev, bool state);
> @@ -1540,9 +1551,13 @@ extern int drm_clients_info(struct seq_file *m, void* data);
>  extern int drm_gem_name_info(struct seq_file *m, void *data);
>
>
> +extern struct dma_buf *drm_gem_prime_export(struct drm_device *dev,
> +		struct drm_gem_object *obj, int flags);
>  extern int drm_gem_prime_handle_to_fd(struct drm_device *dev,
>  		struct drm_file *file_priv, uint32_t handle, uint32_t flags,
>  		int *prime_fd);
> +extern struct drm_gem_object *drm_gem_prime_import(struct drm_device *dev,
> +		struct dma_buf *dma_buf);
>  extern int drm_gem_prime_fd_to_handle(struct drm_device *dev,
>  		struct drm_file *file_priv, int prime_fd, uint32_t *handle);
>
> --
> 1.8.0.1

-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch