On 12/04/16 17:21, Chris Wilson wrote:
On Tue, Apr 12, 2016 at 04:56:51PM +0100, Tvrtko Ursulin wrote:
From: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
By tracking the iomapping on the VMA itself, we can share that area
between multiple users. Also by only revoking the iomapping upon
unbinding from the mappable portion of the GGTT, we can keep that iomap
across multiple invocations (e.g. execlists context pinning).
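Conceptually the mapping becomes a cached pointer on the VMA, released only when the VMA is unbound from the mappable aperture. A rough illustrative sketch (names follow the diff below; the header changes themselves are not quoted here):

struct i915_vma {
	/* ... existing fields ... */
	void __iomem *iomap;	/* cached CPU mapping of the GGTT range */
};

void i915_vma_iounmap(struct i915_vma *vma)
{
	if (vma->iomap == NULL)
		return;

	iounmap(vma->iomap);
	vma->iomap = NULL;
}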
v2:
* Rebase on nightly;
* added kerneldoc. (Tvrtko Ursulin)
Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
---
drivers/gpu/drm/i915/i915_gem.c | 2 ++
drivers/gpu/drm/i915/i915_gem_gtt.c | 38 +++++++++++++++++++++++++++++++++++++
drivers/gpu/drm/i915/i915_gem_gtt.h | 38 +++++++++++++++++++++++++++++++++++++
drivers/gpu/drm/i915/intel_fbdev.c | 8 +++-----
4 files changed, 81 insertions(+), 5 deletions(-)
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index b37ffea8b458..6a485630595e 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -3393,6 +3393,8 @@ static int __i915_vma_unbind(struct i915_vma *vma, bool wait)
ret = i915_gem_object_put_fence(obj);
if (ret)
return ret;
+
+ i915_vma_iounmap(vma);
}
trace_i915_vma_unbind(vma);
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index c5cb04907525..b2a8a14e8dcb 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -3626,3 +3626,41 @@ i915_ggtt_view_size(struct drm_i915_gem_object *obj,
return obj->base.size;
}
}
+
+void *i915_vma_iomap(struct drm_i915_private *dev_priv,
+ struct i915_vma *vma)
+{
+ struct drm_i915_gem_object *obj = vma->obj;
+ struct i915_ggtt *ggtt = &dev_priv->ggtt;
+
+ if (WARN_ON(!obj->map_and_fenceable))
+ return ERR_PTR(-ENODEV);
+
+ BUG_ON(!vma->is_ggtt);
+ BUG_ON(vma->ggtt_view.type != I915_GGTT_VIEW_NORMAL);
+ BUG_ON((vma->bound & GLOBAL_BIND) == 0);
+
+ if (vma->iomap == NULL) {
+ void *ptr;
We could extract ggtt from the is_ggtt vma->vm, which would remove the
dev_priv parameter and make the callers a bit tidier:
static inline struct i915_ggtt *to_ggtt(struct i915_address_space *vm)
{
	BUG_ON(!i915_is_ggtt(vm));
	return container_of(vm, struct i915_ggtt, base);
}
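A caller could then look something like this (sketch only, assuming the
helper above; the body stays as in the diff):

void *i915_vma_iomap(struct i915_vma *vma)
{
	struct i915_ggtt *ggtt = to_ggtt(vma->vm);
	struct drm_i915_gem_object *obj = vma->obj;

	/* ... rest of the function as below, no dev_priv needed ... */
}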
+
+ ptr = ioremap_wc(ggtt->mappable_base + vma->node.start,
+ obj->base.size);
+ if (ptr == NULL) {
+ int ret;
+
+ /* Too many areas already allocated? */
+ ret = i915_gem_evict_vm(vma->vm, true);
+ if (ret)
+ return ERR_PTR(ret);
+
+ ptr = ioremap_wc(ggtt->mappable_base + vma->node.start,
+ obj->base.size);
No, we really don't want to create a new ioremap for every caller when
we already have one, ggtt->mappable. Hence its preceding patch,
"io-mapping: Specify mapping size for io_mapping_map_wc". The
difference is huge on Braswell.
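With that patch in place the lookup could become something like this (a
sketch only, assuming ggtt->mappable is the existing io_mapping and the
size-taking io_mapping_map_wc from that patch):

/* Reuse the single ggtt->mappable io_mapping rather than creating a
 * fresh ioremap_wc() area per VMA.
 */
ptr = io_mapping_map_wc(ggtt->mappable,
			vma->node.start,
			obj->base.size);
if (ptr == NULL)
	return ERR_PTR(-ENOMEM);

vma->iomap = ptr;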
I was following the principle of least change for this one, since in
that respect it behaves the same as the current code. The goal was to
unblock the GuC early unpin / execlist no retired queue series without
falling into the trap of inflating the dependencies too much. I thought
to add just this and the default context cleanup, but you keep adding
things to the pile. :)

I'll have a look at your io_mapping stuff to see how much it would add
to the series. Remember I said I'd add the stable ctx id patches as
well. Do we need to come up with a plan here?
I could have two separate series to simplify dependencies a bit:
1. GuC premature unpin and
2. execlist no retired queue.
The stack for the first one could look like (not as patches but
conceptually):
1. default context cleanup
2. any io_mapping patches of yours
3. i915_vma_iomap or WC pin_map as you suggested in the other reply.
4. previous / pinned context
Execlist no retired queue would then be:
1. stable ctx id
2. removal of i915_gem_request_unreference__unlocked
3. execlist no retired queue
Unless I have forgotten something.
At any point in either series we could add things like i915_gem_request.c
or the patches which move the context init around.
Regards,
Tvrtko