Hi Sascha,

Thank you for the patch. I've successfully tested the helper with the new SH
Mobile DRM driver. Just a couple of comments below in addition to Lars'
comments (this is not a full review, just details that caught my attention
when comparing the code with my implementation and trying to use it).

On Tuesday 29 May 2012 16:10:27 Sascha Hauer wrote:
> Many embedded drm devices do not have a IOMMU and no dedicated
> memory for graphics. These devices use CMA (Contiguous Memory
> Allocator) backed graphics memory. This patch provides helper
> functions to be able to share the code.
>
> Signed-off-by: Sascha Hauer <s.hauer@xxxxxxxxxxxxxx>
> ---
>
> Lars-Peter, please let me know if this fits your needs or if you are
> missing something.
>
>  drivers/gpu/drm/Kconfig              |    6 +
>  drivers/gpu/drm/Makefile             |    1 +
>  drivers/gpu/drm/drm_gem_cma_helper.c |  321 +++++++++++++++++++++++++++++++
>  include/drm/drm_gem_cma_helper.h     |   47 +++++
>  4 files changed, 375 insertions(+)
>  create mode 100644 drivers/gpu/drm/drm_gem_cma_helper.c
>  create mode 100644 include/drm/drm_gem_cma_helper.h

[snip]

> diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c
> new file mode 100644
> index 0000000..75534ff
> --- /dev/null
> +++ b/drivers/gpu/drm/drm_gem_cma_helper.c
> @@ -0,0 +1,321 @@

[snip]

> +/*
> + * drm_gem_cma_create - allocate an object with the given size
> + *
> + * returns a struct drm_gem_cma_obj* on success or ERR_PTR values
> + * on failure.
> + */
> +struct drm_gem_cma_obj *drm_gem_cma_create(struct drm_device *drm,
> +		unsigned int size)
> +{
> +	struct drm_gem_cma_obj *cma_obj;
> +	struct drm_gem_object *gem_obj;
> +	int ret;
> +
> +	size = roundup(size, PAGE_SIZE);

As PAGE_SIZE is guaranteed to be a power of two, you can use round_up, which
should be faster.

> +
> +	cma_obj = kzalloc(sizeof(*cma_obj), GFP_KERNEL);
> +	if (!cma_obj)
> +		return ERR_PTR(-ENOMEM);
> +
> +	cma_obj->vaddr = dma_alloc_writecombine(drm->dev, size,
> +			(dma_addr_t *)&cma_obj->paddr,
> +			GFP_KERNEL | __GFP_NOWARN);
> +	if (!cma_obj->vaddr) {
> +		dev_err(drm->dev, "failed to allocate buffer with size %d\n", size);
> +		ret = -ENOMEM;
> +		goto err_dma_alloc;
> +	}
> +
> +	gem_obj = &cma_obj->base;
> +
> +	ret = drm_gem_object_init(drm, gem_obj, size);
> +	if (ret)
> +		goto err_obj_init;
> +
> +	ret = drm_gem_create_mmap_offset(gem_obj);
> +	if (ret)
> +		goto err_create_mmap_offset;
> +
> +	return cma_obj;
> +
> +err_create_mmap_offset:
> +	drm_gem_object_release(gem_obj);
> +
> +err_obj_init:
> +	drm_gem_cma_buf_destroy(drm, cma_obj);
> +
> +err_dma_alloc:
> +	kfree(cma_obj);
> +
> +	return ERR_PTR(ret);
> +}
> +EXPORT_SYMBOL_GPL(drm_gem_cma_create);

[snip]

> +/*
> + * drm_gem_cma_mmap - (struct file_operation)->mmap callback function
> + */
> +int drm_gem_cma_mmap(struct file *filp, struct vm_area_struct *vma)
> +{
> +	int ret;
> +
> +	ret = drm_gem_mmap(filp, vma);
> +	if (ret)
> +		return ret;
> +
> +	vma->vm_flags &= ~VM_PFNMAP;
> +	vma->vm_flags |= VM_MIXEDMAP;

Why is this a mixed map?

> +
> +	return ret;
> +}
> +EXPORT_SYMBOL_GPL(drm_gem_cma_mmap);

My implementation maps the whole buffer in one go at mmap time, not page by
page at page fault time. Isn't that more efficient when mapping frame buffer
memory?
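For reference, here is roughly what I do in my driver, adapted to the names
used in this patch. This is only a sketch, not a drop-in replacement: the
drm_gem_cma_mmap_obj() helper and the container_of() access to the CMA object
are just how I've structured it here, size/offset sanity checks and error
handling are trimmed, and I haven't tested it in this exact form on top of
your code.

	static int drm_gem_cma_mmap_obj(struct drm_gem_cma_obj *cma_obj,
					struct vm_area_struct *vma)
	{
		unsigned long size = vma->vm_end - vma->vm_start;

		/*
		 * Map the whole contiguous buffer up front instead of
		 * faulting it in page by page.
		 */
		vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);

		return remap_pfn_range(vma, vma->vm_start,
				       cma_obj->paddr >> PAGE_SHIFT,
				       size, vma->vm_page_prot);
	}

	int drm_gem_cma_mmap(struct file *filp, struct vm_area_struct *vma)
	{
		struct drm_gem_object *gem_obj;
		struct drm_gem_cma_obj *cma_obj;
		int ret;

		/* Let the GEM core look up the object and set up the VMA. */
		ret = drm_gem_mmap(filp, vma);
		if (ret)
			return ret;

		gem_obj = vma->vm_private_data;
		cma_obj = container_of(gem_obj, struct drm_gem_cma_obj, base);

		return drm_gem_cma_mmap_obj(cma_obj, vma);
	}

With a physically contiguous scan-out buffer there's nothing to demand-page
anyway, so the fault path looks like pure overhead to me.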
[snip]

> diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
> new file mode 100644
> index 0000000..53b007c
> --- /dev/null
> +++ b/include/drm/drm_gem_cma_helper.h
> @@ -0,0 +1,47 @@
> +#ifndef __DRM_GEM_DMA_ALLOC_HELPER_H__
> +#define __DRM_GEM_DMA_ALLOC_HELPER_H__

Should this be __DRM_GEM_CMA_HELPER_H__?

> +
> +struct drm_gem_cma_obj {
> +	struct drm_gem_object base;
> +	dma_addr_t paddr;
> +	void __iomem *vaddr;
> +};

All DRM objects end with _object; would it be better to rename this to
struct drm_gem_cma_object?

-- 
Regards,

Laurent Pinchart

_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/dri-devel