Re: [PATCH 02/22] drm/i915/gtt: Serialise both updates to PDE and our shadow

On 17/06/2019 08:18, Chris Wilson wrote:
> Currently, we perform a locked update of the shadow entry when
> allocating a page directory entry such that if two clients are
> concurrently allocating neighbouring ranges we only insert one new
> entry for the pair of them. However, we also need to serialise both
> clients wrt the actual entry in the HW table, or else we may allow one
> client, or even a third, to proceed ahead of the HW write. My handwave
> before was that under the _pathological_ condition we would see the
> scratch entry instead of the expected entry, causing a temporary
> glitch. That starvation condition will eventually show up in practice,
> so fix it.
>
> The reason for the previous cheat was to avoid having to free the
> extra allocation while under the spinlock. Now, we keep the extra
> entry allocated until the end instead.
>
> Fixes: 1d1b5490b91c ("drm/i915/gtt: Replace struct_mutex serialisation for allocation")
> Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> Cc: Matthew Auld <matthew.auld@xxxxxxxxx>
> Cc: Mika Kuoppala <mika.kuoppala@xxxxxxxxx>
> ---

[snip]
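As an aside for readers: the pattern the commit message describes, reduced
to a minimal sketch in plain C with pthreads. get_or_install(), struct
table and the malloc(64) payload are invented for illustration; only the
shape matches the patch. The winner publishes the new entry under the same
lock that a concurrent client must take to read it, and the loser's
allocation is kept as a spare that is freed only after the lock has been
dropped:

#include <pthread.h>
#include <stdlib.h>

struct table {
	pthread_spinlock_t lock;
	void *slot;
};

static void *get_or_install(struct table *t)
{
	void *spare = NULL;
	void *entry;

	pthread_spin_lock(&t->lock);
	entry = t->slot;
	if (!entry) {
		/* Never allocate (or free) while holding the spinlock. */
		pthread_spin_unlock(&t->lock);
		entry = malloc(64);
		if (!entry)
			return NULL;

		pthread_spin_lock(&t->lock);
		if (!t->slot) {
			/*
			 * We won the race: publish under the lock. In the
			 * patch this is also where the PDE is written, so
			 * no client can read the shadow entry and run
			 * ahead of the HW write.
			 */
			t->slot = entry;
		} else {
			/* We lost: keep the extra allocation as a spare. */
			spare = entry;
			entry = t->slot;
		}
	}
	pthread_spin_unlock(&t->lock);

	free(spare);	/* the "extra entry", freed only at the end */
	return entry;
}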

> @@ -1819,11 +1831,13 @@ static int gen6_alloc_va_range(struct i915_address_space *vm,
>  			       u64 start, u64 length)
>  {
>  	struct gen6_ppgtt *ppgtt = to_gen6_ppgtt(i915_vm_to_ppgtt(vm));
> +	struct i915_page_table *alloc = NULL;
>  	struct i915_page_table *pt;
>  	intel_wakeref_t wakeref;
>  	u64 from = start;
>  	unsigned int pde;
>  	bool flush = false;
> +	int ret;

ret = 0; is missing. With the success path now exiting via goto out and
return ret, ret has to be initialised.
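That is, presumably the declaration wants to be:

	int ret = 0;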

>  	wakeref = intel_runtime_pm_get(&vm->i915->runtime_pm);
> @@ -1832,19 +1846,18 @@ static int gen6_alloc_va_range(struct i915_address_space *vm,
>  		const unsigned int count = gen6_pte_count(start, length);
> 
>  		if (pt == vm->scratch_pt) {
> -			struct i915_page_table *old;
> -
>  			spin_unlock(&ppgtt->base.pd.lock);
> 
> -			pt = alloc_pt(vm);
> +			pt = alloc;

We have to reset alloc once we've taken it here, no? Otherwise, if this pt
then gets installed into pd.page_table[pde], the free_pt(vm, alloc) at the
end of the function frees a live page table.
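For instance (a sketch only; fetch_and_zero() is the existing
take-and-clear helper in i915_utils.h):

			/*
			 * Take the spare and clear it in one step, so the
			 * free_pt(vm, alloc) at function exit can never
			 * see a pt that has meanwhile been published in
			 * pd.page_table[pde].
			 */
			pt = fetch_and_zero(&alloc);
			if (!pt)
				pt = alloc_pt(vm);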

> +			if (!pt)
> +				pt = alloc_pt(vm);
>  			if (IS_ERR(pt))
>  				goto unwind_out;

ret = PTR_ERR(pt); needs setting before the goto, otherwise ret is
undefined on this error path now that we exit via return ret.
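That is, presumably:

			if (IS_ERR(pt)) {
				ret = PTR_ERR(pt);
				goto unwind_out;
			}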

>  			gen6_initialize_pt(vm, pt);
> -			old = cmpxchg(&ppgtt->base.pd.page_table[pde],
> -				      vm->scratch_pt, pt);
> -			if (old == vm->scratch_pt) {
> +			spin_lock(&ppgtt->base.pd.lock);
> +			if (ppgtt->base.pd.page_table[pde] == vm->scratch_pt) {
>  				ppgtt->base.pd.page_table[pde] = pt;
>  				if (i915_vma_is_bound(ppgtt->vma,
>  						      I915_VMA_GLOBAL_BIND)) {
> @@ -1852,11 +1865,9 @@ static int gen6_alloc_va_range(struct i915_address_space *vm,
>  					flush = true;
>  				}
>  			} else {
> -				free_pt(vm, pt);
> -				pt = old;
> +				alloc = pt;
> +				pt = ppgtt->base.pd.page_table[pde];
>  			}
> -
> -			spin_lock(&ppgtt->base.pd.lock);
>  		}
> 
>  		atomic_add(count, &pt->used_ptes);
> @@ -1868,14 +1879,15 @@ static int gen6_alloc_va_range(struct i915_address_space *vm,
>  		gen6_ggtt_invalidate(vm->i915);
>  	}
> 
> -	intel_runtime_pm_put(&vm->i915->runtime_pm, wakeref);
> -
> -	return 0;
> +	goto out;
> 
>  unwind_out:
> -	intel_runtime_pm_put(&vm->i915->runtime_pm, wakeref);
>  	gen6_ppgtt_clear_range(vm, from, start - from);
> -	return -ENOMEM;
> +out:
> +	if (alloc)
> +		free_pt(vm, alloc);
> +	intel_runtime_pm_put(&vm->i915->runtime_pm, wakeref);
> +	return ret;
>  }
> 
>  static int gen6_ppgtt_init_scratch(struct gen6_ppgtt *ppgtt)

_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx



