Re: [RFC PATCH v2 1/1] drm/i915: Replace shmem memory region and object backend with TTM

>-----Original Message-----
>From: Adrian Larumbe <adrian.larumbe@xxxxxxxxxxxxx>
>Sent: Friday, May 27, 2022 12:08 PM
>To: Ruhl, Michael J <michael.j.ruhl@xxxxxxxxx>
>Cc: daniel@xxxxxxxx; intel-gfx@xxxxxxxxxxxxxxxxxxxxx
>Subject: Re:  [RFC PATCH v2 1/1] drm/i915: Replace shmem memory
>region and object backend with TTM
>
>On 17.05.2022 21:39, Ruhl, Michael J wrote:
>> >-----Original Message-----
>> >From: Intel-gfx <intel-gfx-bounces@xxxxxxxxxxxxxxxxxxxxx> On Behalf Of
>> >Adrian Larumbe
>> >Sent: Tuesday, May 17, 2022 4:45 PM
>> >To: daniel@xxxxxxxx; intel-gfx@xxxxxxxxxxxxxxxxxxxxx
>> >Cc: adrian.larumbe@xxxxxxxxxxxxx
>> >Subject:  [RFC PATCH v2 1/1] drm/i915: Replace shmem memory
>> >region and object backend with TTM
>> >
>> >Remove shmem region and object backend code as they are now
>> >unnecessary.
>> >In the case of objects placed in SMEM and backed by kernel shmem files, the
>> >file pointer has to be retrieved from the ttm_tt structure, so change this
>> >as well. This means accessing an shmem-backed BO's file pointer requires
>> >getting its pages beforehand, unlike in the old shmem backend.
>> >
>> >Expand TTM BO creation by handling devices with no LLC so that their
>> >caching and coherency properties are set accordingly.
>> >
>> >Adapt phys backend to put pages of original shmem object in a 'TTM way',
>> >also making sure its pages are put when a TTM shmem object has no struct
>> >page.
>> >
>> >Signed-off-by: Adrian Larumbe <adrian.larumbe@xxxxxxxxxxxxx>
>> >---
>> > drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c   |  12 +-
>> > drivers/gpu/drm/i915/gem/i915_gem_mman.c     |  32 +-
>> > drivers/gpu/drm/i915/gem/i915_gem_object.h   |   4 +-
>> > drivers/gpu/drm/i915/gem/i915_gem_phys.c     |  32 +-
>> > drivers/gpu/drm/i915/gem/i915_gem_shmem.c    | 390 +------------------
>> > drivers/gpu/drm/i915/gem/i915_gem_ttm.c      | 267 ++++++++++++-
>> > drivers/gpu/drm/i915/gem/i915_gem_ttm.h      |   3 +
>> > drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c |   9 +-
>> > drivers/gpu/drm/i915/gt/shmem_utils.c        |  64 ++-
>> > drivers/gpu/drm/i915/intel_memory_region.c   |   7 +-
>> > 10 files changed, 398 insertions(+), 422 deletions(-)
>> >
>> >diff --git a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>> >index f5062d0c6333..de04ce4210b3 100644
>> >--- a/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>> >+++ b/drivers/gpu/drm/i915/gem/i915_gem_dmabuf.c
>> >@@ -12,6 +12,7 @@
>> > #include <asm/smp.h>
>> >
>> > #include "gem/i915_gem_dmabuf.h"
>> >+#include "gem/i915_gem_ttm.h"
>> > #include "i915_drv.h"
>> > #include "i915_gem_object.h"
>> > #include "i915_scatterlist.h"
>> >@@ -94,22 +95,25 @@ static int i915_gem_dmabuf_mmap(struct dma_buf *dma_buf, struct vm_area_struct *
>> > {
>> > 	struct drm_i915_gem_object *obj = dma_buf_to_obj(dma_buf);
>> > 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
>> >+	struct file *filp = i915_gem_ttm_get_filep(obj);
>> > 	int ret;
>> >
>> >+	GEM_BUG_ON(obj->base.import_attach != NULL);
>> >+
>> > 	if (obj->base.size < vma->vm_end - vma->vm_start)
>> > 		return -EINVAL;
>> >
>> > 	if (HAS_LMEM(i915))
>> > 		return drm_gem_prime_mmap(&obj->base, vma);
>>
>> Since all of the devices that have device memory will return true for HAS_LMEM,
>> won't your code path always go to drm_gem_prime_mmap()?
>
>This makes me wonder: how was mapping of a dmabuf BO working before on DG2
>and DG1, when an object is smem-bound and therefore backed by shmem?

LMEM is a new thing.  DG2 and DG1 have it available, but the initial patch
sets did not support access via dma-buf.

With TTM being used for the LMEM objects, I think that the drm_gem code was
able to be used.

>
>I guess in this case it might be more sensible to check for the case that it's
>an lmem-only object on a discrete platform, as follows:
>
>static int i915_gem_dmabuf_mmap(struct dma_buf *dma_buf, struct vm_area_struct *vma)
>{
>	struct drm_i915_gem_object *obj = dma_buf_to_obj(dma_buf);
>	struct drm_i915_private *i915 = to_i915(obj->base.dev);
>	struct ttm_buffer_object *bo = i915_gem_to_ttm(obj);
>	struct file *filp = i915_gem_ttm_get_filep(obj);
>	int ret;
>
>	if (obj->base.size < vma->vm_end - vma->vm_start)
>		return -EINVAL;
>
>	if (IS_DGFX(i915) && i915_ttm_cpu_maps_iomem(bo->resource))
>		return drm_gem_prime_mmap(&obj->base, vma);

HAS_LMEM is roughly IS_DGFX...

As for the second part, (_maps_iomem), hmm...

If I am following this correctly, drm_gem_prime_mmap() will call i915_gem_mmap,
so I think that just the call (without the iomem check) is correct; see the
sketch after your quoted function below.

This is a newer mmap code path than the older integrated cards had, so I think
the right check is HAS_LMEM (new and discrete), otherwise the old path.

>	if (IS_ERR(filp))
>		return PTR_ERR(filp);
>
>	ret = call_mmap(filp, vma);
>	if (ret)
>		return ret;
>
>	vma_set_file(vma, filp);
>
>	return 0;
>}
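
Something like this, in other words; this is basically the version already in
your patch, with the HAS_LMEM check kept and no iomem check (untested sketch):

static int i915_gem_dmabuf_mmap(struct dma_buf *dma_buf, struct vm_area_struct *vma)
{
	struct drm_i915_gem_object *obj = dma_buf_to_obj(dma_buf);
	struct drm_i915_private *i915 = to_i915(obj->base.dev);
	struct file *filp = i915_gem_ttm_get_filep(obj);
	int ret;

	if (obj->base.size < vma->vm_end - vma->vm_start)
		return -EINVAL;

	/* New/discrete parts: drm_gem_prime_mmap() ends up in i915_gem_mmap() */
	if (HAS_LMEM(i915))
		return drm_gem_prime_mmap(&obj->base, vma);

	/* Older integrated parts: mmap the shmem file directly */
	if (!filp)
		return -ENODEV;

	ret = call_mmap(filp, vma);
	if (ret)
		return ret;

	vma_set_file(vma, filp);

	return 0;
}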
>
>> >-	if (!obj->base.filp)
>> >+	if (!filp)
>> > 		return -ENODEV;
>> >
>> >-	ret = call_mmap(obj->base.filp, vma);
>> >+	ret = call_mmap(filp, vma);
>> > 	if (ret)
>> > 		return ret;
>> >
>> >-	vma_set_file(vma, obj->base.filp);
>> >+	vma_set_file(vma, filp);
>> >
>> > 	return 0;
>> > }
>> >@@ -224,6 +228,8 @@ struct dma_buf *i915_gem_prime_export(struct drm_gem_object *gem_obj, int flags)
>> > 	exp_info.priv = gem_obj;
>> > 	exp_info.resv = obj->base.resv;
>> >
>> >+	GEM_BUG_ON(obj->base.import_attach != NULL);
>> >+
>> > 	if (obj->ops->dmabuf_export) {
>> > 		int ret = obj->ops->dmabuf_export(obj);
>> > 		if (ret)
>> >diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
>> >index 0c5c43852e24..d963cf35fdc9 100644
>> >--- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c
>> >+++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
>> >@@ -64,7 +64,9 @@ i915_gem_mmap_ioctl(struct drm_device *dev, void *data,
>> > 	struct drm_i915_private *i915 = to_i915(dev);
>> > 	struct drm_i915_gem_mmap *args = data;
>> > 	struct drm_i915_gem_object *obj;
>> >+	struct file *filp;
>> > 	unsigned long addr;
>> >+	int ret;
>> >
>> > 	/*
>> > 	 * mmap ioctl is disallowed for all discrete platforms,
>> >@@ -83,12 +85,20 @@ i915_gem_mmap_ioctl(struct drm_device *dev, void *data,
>> > 	if (!obj)
>> > 		return -ENOENT;
>> >
>> >-	/* prime objects have no backing filp to GEM mmap
>> >-	 * pages from.
>> >-	 */
>> >-	if (!obj->base.filp) {
>> >-		addr = -ENXIO;
>> >-		goto err;
>> >+	if (obj->base.import_attach)
>> >+		filp = obj->base.filp;
>>
>> Why is this now ok?  This is the imported object. And it used to give an error.
>>
>> If you are importing from a different device (i.e. an AMD device is exporting
>> the object, and you are i915 here), can you even do this?
>>
>> My understanding was that mmaping was only for the exported object.
>>
>> Has this changed?
>
>You're right here, I completely misunderstood how this function is meant to
>deal with imported objects. This arose as a consequence of the file pointer
>being moved into the ttm_tt structure from the base DRM object.

Hmm, one other thing that I am seeing here for i915_gem_mmap_ioctl:

	/*
	 * mmap ioctl is disallowed for all discrete platforms,
	 * and for all platforms with GRAPHICS_VER > 12.
	 */
	if (IS_DGFX(i915) || GRAPHICS_VER_FULL(i915) > IP_VER(12, 0))
		return -EOPNOTSUPP;

You need to use i915_gem_mmap_offset_ioctl() to do the mmap on discrete
devices (i.e. devices with LMEM).

So are we looking at the right code path?
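
For reference, the userspace side on discrete looks roughly like the following
(a sketch from memory of the uapi, not tested; check which I915_MMAP_OFFSET_*
flags each platform actually accepts):

#include <stddef.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <drm/i915_drm.h>

/* Ask the kernel for a fake mmap offset for the BO, then mmap() the drm fd
 * at that offset.  This is the path that works on LMEM-capable devices,
 * where the old MMAP ioctl returns -EOPNOTSUPP.
 */
static void *map_bo(int drm_fd, uint32_t handle, size_t size)
{
	struct drm_i915_gem_mmap_offset arg = {
		.handle = handle,
		/* On discrete I believe only I915_MMAP_OFFSET_FIXED is
		 * accepted; integrated parts take WB/WC/UC as well. */
		.flags = I915_MMAP_OFFSET_FIXED,
	};

	if (ioctl(drm_fd, DRM_IOCTL_I915_GEM_MMAP_OFFSET, &arg))
		return MAP_FAILED;

	return mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
		    drm_fd, arg.offset);
}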

>The way I now check that it's an imported object, and that this function
>should therefore return an error, is as follows:
>
>/* prime objects have no backing filp to GEM mmap
> * pages from.
> */
>if (obj->base.import_attach) {
>	GEM_WARN_ON(obj->base.filp != NULL);
>	addr = -ENXIO;
>	goto err;
>}
>
>I believe the import_attach member has to be set for all imported objects.
>However, this just made me think: what would happen if we wanted to mmap a BO
>that we have exported for another driver to use? The import_attach field would
>be set, so my code would return an error, even though we should be in full
>control of mmapping it.

I am not following your comment.

import_attach is the BO's pointer to the dmabuf attachment.

If I export a BO, I cannot set this pointer on the BO because it is for the
import side only.
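
Roughly, the import path looks like this (paraphrasing i915_gem_prime_import()
from memory, with error handling and the object-init details left out, so treat
it as a sketch; sketch_prime_import is just a placeholder name):

struct drm_gem_object *sketch_prime_import(struct drm_device *dev,
					   struct dma_buf *dma_buf)
{
	struct dma_buf_attachment *attach;
	struct drm_i915_gem_object *obj;

	if (dma_buf->ops == &i915_dmabuf_ops) {
		obj = dma_buf_to_obj(dma_buf);
		/* Importing one of our own dmabufs back: reuse the original
		 * BO, so import_attach never gets set on an exported BO. */
		if (obj->base.dev == dev) {
			drm_gem_object_get(&obj->base);
			return &obj->base;
		}
	}

	attach = dma_buf_attach(dma_buf, dev->dev);
	if (IS_ERR(attach))
		return ERR_CAST(attach);

	obj = i915_gem_object_alloc();
	drm_gem_private_object_init(dev, &obj->base, dma_buf->size);
	/* i915_gem_object_init() with the dmabuf ops omitted here */

	/* Only the importer's freshly created object gets these */
	obj->base.import_attach = attach;
	obj->base.resv = dma_buf->resv;

	return &obj->base;
}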

>Maybe by allowing the function to go forward when the base GEM object's
>dma-buf operations are the same as our driver's:
>
>i915_gem_dmabuf.c:
>
>const struct dma_buf_ops *i915_get_dmabuf_ops(void) {
>        return &i915_dmabuf_ops;
>}

These will only get set on the exported BO.

>i915_gem_mman.c:
>
>/* prime objects have no backing filp to GEM mmap
> * pages from.
> */
>if (obj->base.import_attach &&
>    obj->base.dma_buf->ops != i915_get_dmabuf_ops()) {
>	GEM_WARN_ON(obj->base.filp != NULL);
>	addr = -ENXIO;
>	goto err;
>}

So you would never have both import_attach and i915_dmabuf_ops set on the same object.

Can you restate what you are trying to solve here?  Maybe that
will make things more clear?

Thanks,

M


>> Thanks,
>>
>> Mike
>
>Cheers,
>Adrian



