Re: [PATCH 3/4] drm/exynos: added userptr feature.

On Wed, May 16, 2012 at 12:04 AM, Inki Dae <inki.dae@xxxxxxxxxxx> wrote:
>
>
>> -----Original Message-----
>> From: robdclark@xxxxxxxxx [mailto:robdclark@xxxxxxxxx] On Behalf Of Rob
>> Clark
>> Sent: Tuesday, May 15, 2012 11:29 PM
>> To: InKi Dae
>> Cc: Inki Dae; kyungmin.park@xxxxxxxxxxx; sw0312.kim@xxxxxxxxxxx; dri-
>> devel@xxxxxxxxxxxxxxxxxxxxx
>> Subject: Re: [PATCH 3/4] drm/exynos: added userptr feature.
>>
>> On Tue, May 15, 2012 at 7:40 AM, InKi Dae <daeinki@xxxxxxxxx> wrote:
>> > 2012/5/15 Rob Clark <rob.clark@xxxxxxxxxx>:
>> >> On Tue, May 15, 2012 at 2:17 AM, Inki Dae <inki.dae@xxxxxxxxxxx> wrote:
>> >>> Hi Rob,
>> >>>
>> >>>> -----Original Message-----
>> >>>> From: robdclark@xxxxxxxxx [mailto:robdclark@xxxxxxxxx] On Behalf Of Rob Clark
>> >>>> Sent: Tuesday, May 15, 2012 4:35 PM
>> >>>> To: Inki Dae
>> >>>> Cc: airlied@xxxxxxxx; dri-devel@xxxxxxxxxxxxxxxxxxxxx;
>> >>>> kyungmin.park@xxxxxxxxxxx; sw0312.kim@xxxxxxxxxxx
>> >>>> Subject: Re: [PATCH 3/4] drm/exynos: added userptr feature.
>> >>>>
>> >>>> On Mon, Apr 23, 2012 at 7:43 AM, Inki Dae <inki.dae@xxxxxxxxxxx> wrote:
>> >>>> > This feature can be used to import a memory region allocated by malloc()
>> >>>> > in user mode, or a mmaped memory region allocated by other memory
>> >>>> > allocators. The userptr interface identifies the memory type through the
>> >>>> > vm_flags value and gets the pages or page frame numbers for the user
>> >>>> > space region accordingly.
>> >>>>
>> >>>> I apologize for being a little late to jump in on this thread, but...
>> >>>>
>> >>>> I must confess to not being a huge fan of userptr.  It really is
>> >>>> opening a can of worms, and seems best avoided if at all possible.
>> >>>> I'm not entirely sure the use-case for which you require this, but I
>> >>>> wonder if there isn't an alternative way?   I mean, the main case I
>> >>>> could think of for mapping userspace memory would be something like
>> >>>> texture upload.  But could that be handled in an alternative way,
>> >>>> something like a pwrite or texture_upload API, which could temporarily
>> >>>> pin the userspace memory, kick off a dma from that user buffer to a
>> >>>> proper GEM buffer, and then unpin the user buffer when the DMA
>> >>>> completes?
>> >>>>
>> >>>
>> >>> This feature has been discussed with the drm and linux-mm guys, and I have
>> >>> posted this patch four times.
>> >>> So please see the e-mail thread below:
>> >>> http://www.spinics.net/lists/dri-devel/msg22729.html
>> >>>
>> >>> and then please give me your opinions.
>> >>
>> >>
>> >> Yeah, I read that and understand that the purpose is to support
>> >> malloc()'d memory (and btw, all the other changes, like the limits on the
>> >> number of userptr buffers, are good if we do decide to support userptr
>> >> buffers).  But my question was more about why you need to support
>> >> malloc'd buffers... other than "just because v4l2 does" ;-)
>> >>
>> >> I don't really like the userptr feature in v4l2 either, and view it as
>> >> a necessary evil because at the time there was nothing like dmabuf.
>> >> But now that we have dmabuf to share buffers with zero copy between
>> >> two devices, do we *really* still need userptr?
>> >>
>> >
>> > Definitely not. As I mentioned in this email thread, we are using the
>> > userptr feature so that the pixman and Evas backends can use GPU
>> > acceleration directly (zero-copy). As you may know, Evas is part of the
>> > Enlightenment Foundation Libraries (EFL) and Elementary, and pixman is used
>> > for the software path. All the applications based on them use a user
>> > address to draw, so the backends (such as GPU accelerators) can refer only
>> > to that user address for the rendered buffer. Do you think we can avoid a
>> > memory copy when using those GPUs without such a userptr feature? I think
>> > there is no way other than having the GPU render from the user address
>> > without a memory copy. And, as mentioned in this thread, this feature has
>> > also been tried by i915 in the desktop world; for that, you can refer to
>> > Daniel's comments.
>>
>> oh, yeah, something like pixman is hard to accelerate.  But I
>> wonder if there is still some benefit.  I don't know as much about
>> evas, but pixman is assumed to be a synchronous API, so you must
>> always block until the operation completes, which loses a lot of the
>> potential benefit of a blitter.  And also I think you have no control
>> over when the client free()'s a buffer, so you end up having to
>> map/unmap the buffer to the gpu/blitter/whatever for every single
>> operation.  Not to mention the possible need for cache operations, etc.  I
>> wonder if trying to accelerate it in hw really brings any benefit vs just
>> using NEON?
>>
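
(Just to illustrate the per-operation overhead I mean above; this is
hypothetical pseudo-code, none of these helpers are a real API:

    handle = import_userptr(ptr, size);   /* pin pages, build sg table, map for dma */
    cache_flush(ptr, size);               /* CPU wrote the pixels, flush before dma */
    submit_blit(handle, op);              /* kick the blitter */
    wait_for_completion(handle);          /* pixman is synchronous, so block here */
    cache_invalidate(ptr, size);          /* CPU will read the result afterwards */
    release_userptr(handle);              /* unpin before the client can free() it */

so the blit has to be fairly large before that overhead is paid back.)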
>
> Yes, actually, we are also using the NEON backend of pixman for best
> performance: pixman's NEON backend for small images and EXA's 2D GPU backend
> for large images, because NEON is faster than the 2D GPU for small images but
> not for large ones.
>
>> I thought EFL has a GLES backend..  I could be wrong about that, but
>> if it does probably you'd get better performance by using the GLES
>> backend vs trying to accelerate what is intended as a sw path..
>>
> Right, EFL has a GLES backend and also pixman for the software path. An
> application using Elementary can use pixman for the software path and GLES for
> hardware acceleration. So the purpose of the userptr feature is to let a
> buffer drawn by any user through Evas be rendered by the GPU hardware.


hmm, I don't suppose it is primarily a one-way transfer (i.e. like a
one-time blit from a sw-rendered surface to a texture)?  I'd be more
comfortable with an API that just creates a temporary mapping and
unpins the pages before the syscall returns, i.e. an
EXYNOS_IOCTL_BLIT_FROM_USERPTR type interface, which does a blit
from a user ptr to a GEM bo.
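
To make that concrete, a rough sketch of the kind of interface I have in mind
(purely illustrative; the struct, fields and ioctl number below are made up
and do not exist anywhere):

    /*
     * Hypothetical one-shot blit interface: the kernel pins the user pages,
     * kicks the blit into the destination GEM bo, waits for completion and
     * unpins again before returning, so nothing stays pinned after the
     * syscall.
     */
    struct drm_exynos_gem_blit_from_userptr {
            uint64_t userptr;        /* source address in the caller's address space */
            uint64_t size;           /* number of bytes to copy */
            unsigned int handle;     /* destination GEM handle */
            unsigned int dst_offset; /* byte offset into the destination bo */
    };

    #define DRM_EXYNOS_GEM_BLIT_FROM_USERPTR       0x04    /* number only for illustration */
    #define DRM_IOCTL_EXYNOS_GEM_BLIT_FROM_USERPTR DRM_IOWR(DRM_COMMAND_BASE + \
                    DRM_EXYNOS_GEM_BLIT_FROM_USERPTR, struct drm_exynos_gem_blit_from_userptr)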

BR,
-R

>
> Thanks,
> Inki Dae
>
>> > And as you know, we are already using dmabuf to share a buffer between
>> > device drivers, and between the v4l2 and drm frameworks. For this, we also
>> > tested the drm prime feature with the v4l2 world, and we applied the UMM
>> > concept to UMP for the 3D GPU (known as Mali) as well.
>> >
>> > Anyway, this userptr feature is different from what you have in mind, and
>> > maybe you also need such a feature for performance enhancement. Please give
>> > me any idea if you have a good alternative to this userptr feature; there
>> > may be a good idea I don't know about.
>>
>> well, I just try and point people to APIs that are actually designed
>> with hw acceleration in mind in the first place (x11/EXA, GLES, etc)
>> ;-)
>>
>> There is really an API missing for client-side 2d accel.  Well, I guess
>> there is OpenVG, although I'm not sure if anyone ever actually used
>> it.. maybe we do need something to better support 2d blitters in the
>> weston compositor, for example.
>>
>> BR,
>> -R
>>
>>
>> > Thanks,
>> > Inki Dae
>> >
>> >> I guess if I understood the use-case better, maybe I could try to
>> >> think of some alternatives.
>> >>
>> >> BR,
>> >> -R
>> >>
>> >>> Thanks,
>> >>> Inki Dae
>> >>>
>> >>>> BR,
>> >>>> -R
>> >>>>
>> >>>> > Signed-off-by: Inki Dae <inki.dae@xxxxxxxxxxx>
>> >>>> > Signed-off-by: Kyungmin Park <kyungmin.park@xxxxxxxxxxx>
>> >>>> > ---
>> >>>> >  drivers/gpu/drm/exynos/exynos_drm_drv.c |    2 +
>> >>>> >  drivers/gpu/drm/exynos/exynos_drm_gem.c |  258 +++++++++++++++++++++++++++++++
>> >>>> >  drivers/gpu/drm/exynos/exynos_drm_gem.h |   13 ++-
>> >>>> >  include/drm/exynos_drm.h                |   25 +++-
>> >>>> >  4 files changed, 296 insertions(+), 2 deletions(-)
>> >>>> >
>> >>>> > diff --git a/drivers/gpu/drm/exynos/exynos_drm_drv.c b/drivers/gpu/drm/exynos/exynos_drm_drv.c
>> >>>> > index f58a487..5bb0361 100644
>> >>>> > --- a/drivers/gpu/drm/exynos/exynos_drm_drv.c
>> >>>> > +++ b/drivers/gpu/drm/exynos/exynos_drm_drv.c
>> >>>> > @@ -211,6 +211,8 @@ static struct drm_ioctl_desc exynos_ioctls[] = {
>> >>>> >                        DRM_AUTH),
>> >>>> >        DRM_IOCTL_DEF_DRV(EXYNOS_GEM_MMAP,
>> >>>> >                        exynos_drm_gem_mmap_ioctl, DRM_UNLOCKED | DRM_AUTH),
>> >>>> > +       DRM_IOCTL_DEF_DRV(EXYNOS_GEM_USERPTR,
>> >>>> > +                       exynos_drm_gem_userptr_ioctl, DRM_UNLOCKED),
>> >>>> >        DRM_IOCTL_DEF_DRV(EXYNOS_PLANE_SET_ZPOS,
>> >>>> exynos_plane_set_zpos_ioctl,
>> >>>> >                        DRM_UNLOCKED | DRM_AUTH),
>> >>>> >        DRM_IOCTL_DEF_DRV(EXYNOS_VIDI_CONNECTION,
>> >>>> > diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.c b/drivers/gpu/drm/exynos/exynos_drm_gem.c
>> >>>> > index afd0cd4..b68d4ea 100644
>> >>>> > --- a/drivers/gpu/drm/exynos/exynos_drm_gem.c
>> >>>> > +++ b/drivers/gpu/drm/exynos/exynos_drm_gem.c
>> >>>> > @@ -66,6 +66,43 @@ static int check_gem_flags(unsigned int flags)
>> >>>> >        return 0;
>> >>>> >  }
>> >>>> >
>> >>>> > +static struct vm_area_struct *get_vma(struct vm_area_struct *vma)
>> >>>> > +{
>> >>>> > +       struct vm_area_struct *vma_copy;
>> >>>> > +
>> >>>> > +       vma_copy = kmalloc(sizeof(*vma_copy), GFP_KERNEL);
>> >>>> > +       if (!vma_copy)
>> >>>> > +               return NULL;
>> >>>> > +
>> >>>> > +       if (vma->vm_ops && vma->vm_ops->open)
>> >>>> > +               vma->vm_ops->open(vma);
>> >>>> > +
>> >>>> > +       if (vma->vm_file)
>> >>>> > +               get_file(vma->vm_file);
>> >>>> > +
>> >>>> > +       memcpy(vma_copy, vma, sizeof(*vma));
>> >>>> > +
>> >>>> > +       vma_copy->vm_mm = NULL;
>> >>>> > +       vma_copy->vm_next = NULL;
>> >>>> > +       vma_copy->vm_prev = NULL;
>> >>>> > +
>> >>>> > +       return vma_copy;
>> >>>> > +}
>> >>>> > +
>> >>>> > +static void put_vma(struct vm_area_struct *vma)
>> >>>> > +{
>> >>>> > +       if (!vma)
>> >>>> > +               return;
>> >>>> > +
>> >>>> > +       if (vma->vm_ops && vma->vm_ops->close)
>> >>>> > +               vma->vm_ops->close(vma);
>> >>>> > +
>> >>>> > +       if (vma->vm_file)
>> >>>> > +               fput(vma->vm_file);
>> >>>> > +
>> >>>> > +       kfree(vma);
>> >>>> > +}
>> >>>> > +
>> >>>> >  static void update_vm_cache_attr(struct exynos_drm_gem_obj *obj,
>> >>>> >                                        struct vm_area_struct *vma)
>> >>>> >  {
>> >>>> > @@ -254,6 +291,41 @@ static void exynos_drm_gem_put_pages(struct drm_gem_object *obj)
>> >>>> >        /* add some codes for UNCACHED type here. TODO */
>> >>>> >  }
>> >>>> >
>> >>>> > +static void exynos_drm_put_userptr(struct drm_gem_object *obj)
>> >>>> > +{
>> >>>> > +       struct exynos_drm_gem_obj *exynos_gem_obj;
>> >>>> > +       struct exynos_drm_gem_buf *buf;
>> >>>> > +       struct vm_area_struct *vma;
>> >>>> > +       int npages;
>> >>>> > +
>> >>>> > +       exynos_gem_obj = to_exynos_gem_obj(obj);
>> >>>> > +       buf = exynos_gem_obj->buffer;
>> >>>> > +       vma = exynos_gem_obj->vma;
>> >>>> > +
>> >>>> > +       if (vma && (vma->vm_flags & VM_PFNMAP) && (vma->vm_pgoff)) {
>> >>>> > +               put_vma(exynos_gem_obj->vma);
>> >>>> > +               goto out;
>> >>>> > +       }
>> >>>> > +
>> >>>> > +       npages = buf->size >> PAGE_SHIFT;
>> >>>> > +
>> >>>> > +       npages--;
>> >>>> > +       while (npages >= 0) {
>> >>>> > +               if (buf->write)
>> >>>> > +                       set_page_dirty_lock(buf->pages[npages]);
>> >>>> > +
>> >>>> > +               put_page(buf->pages[npages]);
>> >>>> > +               npages--;
>> >>>> > +       }
>> >>>> > +
>> >>>> > +out:
>> >>>> > +       kfree(buf->pages);
>> >>>> > +       buf->pages = NULL;
>> >>>> > +
>> >>>> > +       kfree(buf->sgt);
>> >>>> > +       buf->sgt = NULL;
>> >>>> > +}
>> >>>> > +
>> >>>> >  static int exynos_drm_gem_handle_create(struct drm_gem_object *obj,
>> >>>> >                                        struct drm_file *file_priv,
>> >>>> >                                        unsigned int *handle)
>> >>>> > @@ -293,6 +365,8 @@ void exynos_drm_gem_destroy(struct exynos_drm_gem_obj *exynos_gem_obj)
>> >>>> >
>> >>>> >        if (exynos_gem_obj->flags & EXYNOS_BO_NONCONTIG)
>> >>>> >                exynos_drm_gem_put_pages(obj);
>> >>>> > +       else if (exynos_gem_obj->flags & EXYNOS_BO_USERPTR)
>> >>>> > +               exynos_drm_put_userptr(obj);
>> >>>> >        else
>> >>>> >                exynos_drm_free_buf(obj->dev, exynos_gem_obj->flags, buf);
>> >>>> >
>> >>>> > @@ -606,6 +680,190 @@ int exynos_drm_gem_mmap_ioctl(struct drm_device *dev, void *data,
>> >>>> >        return 0;
>> >>>> >  }
>> >>>> >
>> >>>> > +static int exynos_drm_get_userptr(struct drm_device *dev,
>> >>>> > +                               struct exynos_drm_gem_obj *obj,
>> >>>> > +                               unsigned long userptr,
>> >>>> > +                               unsigned int write)
>> >>>> > +{
>> >>>> > +       unsigned int get_npages;
>> >>>> > +       unsigned long npages = 0;
>> >>>> > +       struct vm_area_struct *vma;
>> >>>> > +       struct exynos_drm_gem_buf *buf = obj->buffer;
>> >>>> > +
>> >>>> > +       vma = find_vma(current->mm, userptr);
>> >>>> > +
>> >>>> > +       /* the memory region mmaped with VM_PFNMAP. */
>> >>>> > +       if (vma && (vma->vm_flags & VM_PFNMAP) && (vma->vm_pgoff)) {
>> >>>> > +               unsigned long this_pfn, prev_pfn, pa;
>> >>>> > +               unsigned long start, end, offset;
>> >>>> > +               struct scatterlist *sgl;
>> >>>> > +
>> >>>> > +               start = userptr;
>> >>>> > +               offset = userptr & ~PAGE_MASK;
>> >>>> > +               end = start + buf->size;
>> >>>> > +               sgl = buf->sgt->sgl;
>> >>>> > +
>> >>>> > +               for (prev_pfn = 0; start < end; start += PAGE_SIZE) {
>> >>>> > +                       int ret = follow_pfn(vma, start, &this_pfn);
>> >>>> > +                       if (ret)
>> >>>> > +                               return ret;
>> >>>> > +
>> >>>> > +                       if (prev_pfn == 0)
>> >>>> > +                               pa = this_pfn << PAGE_SHIFT;
>> >>>> > +                       else if (this_pfn != prev_pfn + 1)
>> >>>> > +                               return -EFAULT;
>> >>>> > +
>> >>>> > +                       sg_dma_address(sgl) = (pa + offset);
>> >>>> > +                       sg_dma_len(sgl) = PAGE_SIZE;
>> >>>> > +                       prev_pfn = this_pfn;
>> >>>> > +                       pa += PAGE_SIZE;
>> >>>> > +                       npages++;
>> >>>> > +                       sgl = sg_next(sgl);
>> >>>> > +               }
>> >>>> > +
>> >>>> > +               buf->dma_addr = pa + offset;
>> >>>> > +
>> >>>> > +               obj->vma = get_vma(vma);
>> >>>> > +               if (!obj->vma)
>> >>>> > +                       return -ENOMEM;
>> >>>> > +
>> >>>> > +               buf->pfnmap = true;
>> >>>> > +
>> >>>> > +               return npages;
>> >>>> > +       }
>> >>>> > +
>> >>>> > +       buf->write = write;
>> >>>> > +       npages = buf->size >> PAGE_SHIFT;
>> >>>> > +
>> >>>> > +       down_read(&current->mm->mmap_sem);
>> >>>> > +       get_npages = get_user_pages(current, current->mm, userptr,
>> >>>> > +                                       npages, write, 1, buf->pages, NULL);
>> >>>> > +       up_read(&current->mm->mmap_sem);
>> >>>> > +       if (get_npages != npages)
>> >>>> > +               DRM_ERROR("failed to get user_pages.\n");
>> >>>> > +
>> >>>> > +       buf->pfnmap = false;
>> >>>> > +
>> >>>> > +       return get_npages;
>> >>>> > +}
>> >>>> > +
>> >>>> > +int exynos_drm_gem_userptr_ioctl(struct drm_device *dev, void *data,
>> >>>> > +                                     struct drm_file *file_priv)
>> >>>> > +{
>> >>>> > +       struct exynos_drm_gem_obj *exynos_gem_obj;
>> >>>> > +       struct drm_exynos_gem_userptr *args = data;
>> >>>> > +       struct exynos_drm_gem_buf *buf;
>> >>>> > +       struct scatterlist *sgl;
>> >>>> > +       unsigned long size, userptr;
>> >>>> > +       unsigned int npages;
>> >>>> > +       int ret, get_npages;
>> >>>> > +
>> >>>> > +       DRM_DEBUG_KMS("%s\n", __FILE__);
>> >>>> > +
>> >>>> > +       if (!args->size) {
>> >>>> > +               DRM_ERROR("invalid size.\n");
>> >>>> > +               return -EINVAL;
>> >>>> > +       }
>> >>>> > +
>> >>>> > +       ret = check_gem_flags(args->flags);
>> >>>> > +       if (ret)
>> >>>> > +               return ret;
>> >>>> > +
>> >>>> > +       size = roundup_gem_size(args->size, EXYNOS_BO_USERPTR);
>> >>>> > +       userptr = args->userptr;
>> >>>> > +
>> >>>> > +       buf = exynos_drm_init_buf(dev, size);
>> >>>> > +       if (!buf)
>> >>>> > +               return -ENOMEM;
>> >>>> > +
>> >>>> > +       exynos_gem_obj = exynos_drm_gem_init(dev, size);
>> >>>> > +       if (!exynos_gem_obj) {
>> >>>> > +               ret = -ENOMEM;
>> >>>> > +               goto err_free_buffer;
>> >>>> > +       }
>> >>>> > +
>> >>>> > +       buf->sgt = kzalloc(sizeof(struct sg_table), GFP_KERNEL);
>> >>>> > +       if (!buf->sgt) {
>> >>>> > +               DRM_ERROR("failed to allocate buf->sgt.\n");
>> >>>> > +               ret = -ENOMEM;
>> >>>> > +               goto err_release_gem;
>> >>>> > +       }
>> >>>> > +
>> >>>> > +       npages = size >> PAGE_SHIFT;
>> >>>> > +
>> >>>> > +       ret = sg_alloc_table(buf->sgt, npages, GFP_KERNEL);
>> >>>> > +       if (ret < 0) {
>> >>>> > +               DRM_ERROR("failed to initailize sg table.\n");
>> >>>> > +               goto err_free_sgt;
>> >>>> > +       }
>> >>>> > +
>> >>>> > +       buf->pages = kzalloc(npages * sizeof(struct page *), GFP_KERNEL);
>> >>>> > +       if (!buf->pages) {
>> >>>> > +               DRM_ERROR("failed to allocate buf->pages\n");
>> >>>> > +               ret = -ENOMEM;
>> >>>> > +               goto err_free_table;
>> >>>> > +       }
>> >>>> > +
>> >>>> > +       exynos_gem_obj->buffer = buf;
>> >>>> > +
>> >>>> > +       get_npages = exynos_drm_get_userptr(dev, exynos_gem_obj, userptr, 1);
>> >>>> > +       if (get_npages != npages) {
>> >>>> > +               DRM_ERROR("failed to get user_pages.\n");
>> >>>> > +               ret = get_npages;
>> >>>> > +               goto err_release_userptr;
>> >>>> > +       }
>> >>>> > +
>> >>>> > +       ret = exynos_drm_gem_handle_create(&exynos_gem_obj->base, file_priv,
>> >>>> > +                                               &args->handle);
>> >>>> > +       if (ret < 0) {
>> >>>> > +               DRM_ERROR("failed to create gem handle.\n");
>> >>>> > +               goto err_release_userptr;
>> >>>> > +       }
>> >>>> > +
>> >>>> > +       sgl = buf->sgt->sgl;
>> >>>> > +
>> >>>> > +       /*
>> >>>> > +        * if buf->pfnmap is false then update the sgl of the sgt with pages,
>> >>>> > +        * but if buf->pfnmap is true then the sgl was already updated
>> >>>> > +        * so it doesn't need to be updated again.
>> >>>> > +        */
>> >>>> > +       if (!buf->pfnmap) {
>> >>>> > +               unsigned int i = 0;
>> >>>> > +
>> >>>> > +               /* set all pages to sg list. */
>> >>>> > +               while (i < npages) {
>> >>>> > +                       sg_set_page(sgl, buf->pages[i], PAGE_SIZE, 0);
>> >>>> > +                       sg_dma_address(sgl) = page_to_phys(buf->pages[i]);
>> >>>> > +                       i++;
>> >>>> > +                       sgl = sg_next(sgl);
>> >>>> > +               }
>> >>>> > +       }
>> >>>> > +
>> >>>> > +       /* always use EXYNOS_BO_USERPTR as memory type for userptr. */
>> >>>> > +       exynos_gem_obj->flags |= EXYNOS_BO_USERPTR;
>> >>>> > +
>> >>>> > +       return 0;
>> >>>> > +
>> >>>> > +err_release_userptr:
>> >>>> > +       get_npages--;
>> >>>> > +       while (get_npages >= 0)
>> >>>> > +               put_page(buf->pages[get_npages--]);
>> >>>> > +       kfree(buf->pages);
>> >>>> > +       buf->pages = NULL;
>> >>>> > +err_free_table:
>> >>>> > +       sg_free_table(buf->sgt);
>> >>>> > +err_free_sgt:
>> >>>> > +       kfree(buf->sgt);
>> >>>> > +       buf->sgt = NULL;
>> >>>> > +err_release_gem:
>> >>>> > +       drm_gem_object_release(&exynos_gem_obj->base);
>> >>>> > +       kfree(exynos_gem_obj);
>> >>>> > +       exynos_gem_obj = NULL;
>> >>>> > +err_free_buffer:
>> >>>> > +       exynos_drm_free_buf(dev, 0, buf);
>> >>>> > +       return ret;
>> >>>> > +}
>> >>>> > +
>> >>>> >  int exynos_drm_gem_init_object(struct drm_gem_object *obj)
>> >>>> >  {
>> >>>> >        DRM_DEBUG_KMS("%s\n", __FILE__);
>> >>>> > diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.h b/drivers/gpu/drm/exynos/exynos_drm_gem.h
>> >>>> > index efc8252..1de2241 100644
>> >>>> > --- a/drivers/gpu/drm/exynos/exynos_drm_gem.h
>> >>>> > +++ b/drivers/gpu/drm/exynos/exynos_drm_gem.h
>> >>>> > @@ -29,7 +29,8 @@
>> >>>> >  #define to_exynos_gem_obj(x)   container_of(x,\
>> >>>> >                        struct exynos_drm_gem_obj, base)
>> >>>> >
>> >>>> > -#define IS_NONCONTIG_BUFFER(f)         (f & EXYNOS_BO_NONCONTIG)
>> >>>> > +#define IS_NONCONTIG_BUFFER(f)         ((f & EXYNOS_BO_NONCONTIG) ||\
>> >>>> > +                                       (f & EXYNOS_BO_USERPTR))
>> >>>> >
>> >>>> >  /*
>> >>>> >  * exynos drm gem buffer structure.
>> >>>> > @@ -38,18 +39,23 @@
>> >>>> >  * @dma_addr: bus address(accessed by dma) to allocated memory region.
>> >>>> >  *     - this address could be physical address without IOMMU and
>> >>>> >  *     device address with IOMMU.
>> >>>> > + * @write: whether pages will be written to by the caller.
>> >>>> >  * @sgt: sg table to transfer page data.
>> >>>> >  * @pages: contain all pages to allocated memory region.
>> >>>> >  * @page_size: could be 4K, 64K or 1MB.
>> >>>> >  * @size: size of allocated memory region.
>> >>>> > + * @pfnmap: indicate whether memory region from userptr is mmaped with
>> >>>> > + *     VM_PFNMAP or not.
>> >>>> >  */
>> >>>> >  struct exynos_drm_gem_buf {
>> >>>> >        void __iomem            *kvaddr;
>> >>>> >        dma_addr_t              dma_addr;
>> >>>> > +       unsigned int            write;
>> >>>> >        struct sg_table         *sgt;
>> >>>> >        struct page             **pages;
>> >>>> >        unsigned long           page_size;
>> >>>> >        unsigned long           size;
>> >>>> > +       bool                    pfnmap;
>> >>>> >  };
>> >>>> >
>> >>>> >  /*
>> >>>> > @@ -73,6 +79,7 @@ struct exynos_drm_gem_obj {
>> >>>> >        struct drm_gem_object           base;
>> >>>> >        struct exynos_drm_gem_buf       *buffer;
>> >>>> >        unsigned long                   size;
>> >>>> > +       struct vm_area_struct           *vma;
>> >>>> >        unsigned int                    flags;
>> >>>> >  };
>> >>>> >
>> >>>> > @@ -127,6 +134,10 @@ int exynos_drm_gem_map_offset_ioctl(struct drm_device *dev, void *data,
>> >>>> >  int exynos_drm_gem_mmap_ioctl(struct drm_device *dev, void *data,
>> >>>> >                              struct drm_file *file_priv);
>> >>>> >
>> >>>> > +/* map user space allocated by malloc to pages. */
>> >>>> > +int exynos_drm_gem_userptr_ioctl(struct drm_device *dev, void *data,
>> >>>> > +                                     struct drm_file *file_priv);
>> >>>> > +
>> >>>> >  /* initialize gem object. */
>> >>>> >  int exynos_drm_gem_init_object(struct drm_gem_object *obj);
>> >>>> >
>> >>>> > diff --git a/include/drm/exynos_drm.h b/include/drm/exynos_drm.h
>> >>>> > index 2d6eb06..48eda6e 100644
>> >>>> > --- a/include/drm/exynos_drm.h
>> >>>> > +++ b/include/drm/exynos_drm.h
>> >>>> > @@ -75,6 +75,23 @@ struct drm_exynos_gem_mmap {
>> >>>> >  };
>> >>>> >
>> >>>> >  /**
>> >>>> > + * User-requested user space importing structure
>> >>>> > + *
>> >>>> > + * @userptr: user space address allocated by malloc.
>> >>>> > + * @size: size to the buffer allocated by malloc.
>> >>>> > + * @flags: indicate user-desired cache attribute to map the allocated buffer
>> >>>> > + *     to kernel space.
>> >>>> > + * @handle: a returned handle to created gem object.
>> >>>> > + *     - this handle will be set by gem module of kernel side.
>> >>>> > + */
>> >>>> > +struct drm_exynos_gem_userptr {
>> >>>> > +       uint64_t userptr;
>> >>>> > +       uint64_t size;
>> >>>> > +       unsigned int flags;
>> >>>> > +       unsigned int handle;
>> >>>> > +};
>> >>>> > +
>> >>>> > +/**
>> >>>> >  * A structure for user connection request of virtual display.
>> >>>> >  *
>> >>>> >  * @connection: indicate whether doing connetion or not by user.
>> >>>> > @@ -105,13 +122,16 @@ enum e_drm_exynos_gem_mem_type {
>> >>>> >        EXYNOS_BO_CACHABLE      = 1 << 1,
>> >>>> >        /* write-combine mapping. */
>> >>>> >        EXYNOS_BO_WC            = 1 << 2,
>> >>>> > +       /* user space memory allocated by malloc. */
>> >>>> > +       EXYNOS_BO_USERPTR       = 1 << 3,
>> >>>> >        EXYNOS_BO_MASK          = EXYNOS_BO_NONCONTIG | EXYNOS_BO_CACHABLE |
>> >>>> > -                                       EXYNOS_BO_WC
>> >>>> > +                                       EXYNOS_BO_WC | EXYNOS_BO_USERPTR
>> >>>> >  };
>> >>>> >
>> >>>> >  #define DRM_EXYNOS_GEM_CREATE          0x00
>> >>>> >  #define DRM_EXYNOS_GEM_MAP_OFFSET      0x01
>> >>>> >  #define DRM_EXYNOS_GEM_MMAP            0x02
>> >>>> > +#define DRM_EXYNOS_GEM_USERPTR         0x03
>> >>>> >  /* Reserved 0x03 ~ 0x05 for exynos specific gem ioctl */
>> >>>> >  #define DRM_EXYNOS_PLANE_SET_ZPOS      0x06
>> >>>> >  #define DRM_EXYNOS_VIDI_CONNECTION     0x07
>> >>>> > @@ -125,6 +145,9 @@ enum e_drm_exynos_gem_mem_type {
>> >>>> >  #define DRM_IOCTL_EXYNOS_GEM_MMAP      DRM_IOWR(DRM_COMMAND_BASE + \
>> >>>> >                DRM_EXYNOS_GEM_MMAP, struct drm_exynos_gem_mmap)
>> >>>> >
>> >>>> > +#define DRM_IOCTL_EXYNOS_GEM_USERPTR   DRM_IOWR(DRM_COMMAND_BASE + \
>> >>>> > +               DRM_EXYNOS_GEM_USERPTR, struct drm_exynos_gem_userptr)
>> >>>> > +
>> >>>> >  #define DRM_IOCTL_EXYNOS_PLANE_SET_ZPOS        DRM_IOWR(DRM_COMMAND_BASE + \
>> >>>> >                DRM_EXYNOS_PLANE_SET_ZPOS, struct drm_exynos_plane_set_zpos)
>> >>>> >
>> >>>> > --
>> >>>> > 1.7.4.1
>> >>>> >
_______________________________________________
dri-devel mailing list
dri-devel@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/dri-devel


