Quoting Matthew Auld (2019-05-07 11:55:56)
> Some steps in gen6_alloc_va_range require the HW to be awake, so ideally
> we should be grabbing the wakeref ourselves and not relying on the
> caller already holding it for us.
>
> Suggested-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> Signed-off-by: Matthew Auld <matthew.auld@xxxxxxxxx>
> ---
>  drivers/gpu/drm/i915/i915_gem_gtt.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
> index 8f5db787b7f2..ffb8c3d011e9 100644
> --- a/drivers/gpu/drm/i915/i915_gem_gtt.c
> +++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
> @@ -1745,10 +1745,13 @@ static int gen6_alloc_va_range(struct i915_address_space *vm,
>  {
>  	struct gen6_hw_ppgtt *ppgtt = to_gen6_ppgtt(i915_vm_to_ppgtt(vm));
>  	struct i915_page_table *pt;
> +	intel_wakeref_t wakeref;
>  	u64 from = start;
>  	unsigned int pde;
>  	bool flush = false;
>
> +	wakeref = intel_runtime_pm_get(vm->i915);
> +
>  	gen6_for_each_pde(pt, &ppgtt->base.pd, start, length, pde) {
>  		const unsigned int count = gen6_pte_count(start, length);
>
> @@ -1774,12 +1777,15 @@ static int gen6_alloc_va_range(struct i915_address_space *vm,
>
>  	if (flush) {
>  		mark_tlbs_dirty(&ppgtt->base);
> -		gen6_ggtt_invalidate(ppgtt->base.vm.i915);
> +		gen6_ggtt_invalidate(vm->i915);
>  	}
>
> +	intel_runtime_pm_put(vm->i915, wakeref);
> +
>  	return 0;
>
>  unwind_out:
> +	intel_runtime_pm_put(vm->i915, wakeref);
>  	gen6_ppgtt_clear_range(vm, from, start - from);
>  	return -ENOMEM;

Reviewed-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>

It's a bit too fiddly here to try and defer it until the next time the HW
is awake -- and really if we are adjusting the iova, then we are going to
be using the HW, and normally would be under a longterm wakeref (e.g.
execbuf) but for in-kernel clients, we need to be more precise.
-Chris
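
For reference, the discipline the patch applies can be sketched in isolation:
an in-kernel path takes its own runtime-PM reference around the HW access
instead of assuming the caller holds one. This is only a minimal sketch
assuming the 2019-era i915 API visible in the diff above, where
intel_runtime_pm_get() returns an intel_wakeref_t cookie that must be handed
back to intel_runtime_pm_put(); touch_hw() is a hypothetical placeholder for
whatever step actually needs the device awake.

static int do_hw_work(struct drm_i915_private *i915)
{
	intel_wakeref_t wakeref;
	int err;

	/* Wake the device and keep it awake for the duration of the access. */
	wakeref = intel_runtime_pm_get(i915);

	err = touch_hw(i915); /* hypothetical: any step requiring the HW awake */

	/* Hand the cookie back on both the success and error paths. */
	intel_runtime_pm_put(i915, wakeref);

	return err;
}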