Quoting Matthew Auld (2019-06-17 11:36:36)
> On 17/06/2019 08:18, Chris Wilson wrote:
> > Currently, we perform a locked update of the shadow entry when
> > allocating a page directory entry such that if two clients are
> > concurrently allocating neighbouring ranges we only insert one new entry
> > for the pair of them. However, we also need to serialise both clients
> > wrt to the actual entry in the HW table, or else we may allow one client
> > or even a third client to proceed ahead of the HW write. My handwave
> > before was that under the _pathological_ condition we would see the
> > scratch entry instead of the expected entry, causing a temporary
> > glitch. That starvation condition will eventually show up in practice, so
> > fix it.
> > 
> > The reason for the previous cheat was to avoid having to free the extra
> > allocation while under the spinlock. Now, we keep the extra entry
> > allocated until the end instead.
> > 
> > Fixes: 1d1b5490b91c ("drm/i915/gtt: Replace struct_mutex serialisation for allocation")
> > Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> > Cc: Matthew Auld <matthew.auld@xxxxxxxxx>
> > Cc: Mika Kuoppala <mika.kuoppala@xxxxxxxxx>
> > ---
> 
> [snip]
> 
> > 
> >   static int gen8_preallocate_top_level_pdp(struct i915_ppgtt *ppgtt)
> > @@ -1819,11 +1831,13 @@ static int gen6_alloc_va_range(struct i915_address_space *vm,
> >   			       u64 start, u64 length)
> >   {
> >   	struct gen6_ppgtt *ppgtt = to_gen6_ppgtt(i915_vm_to_ppgtt(vm));
> > +	struct i915_page_table *alloc = NULL;
> >   	struct i915_page_table *pt;
> >   	intel_wakeref_t wakeref;
> >   	u64 from = start;
> >   	unsigned int pde;
> >   	bool flush = false;
> > +	int ret;
> 
> ret = 0;

Yeah, originally this kept to the separate exit paths, forgot to fix
after merging.

> >   	wakeref = intel_runtime_pm_get(&vm->i915->runtime_pm);
> > 
> > @@ -1832,19 +1846,18 @@ static int gen6_alloc_va_range(struct i915_address_space *vm,
> >   		const unsigned int count = gen6_pte_count(start, length);
> > 
> >   		if (pt == vm->scratch_pt) {
> > -			struct i915_page_table *old;
> > -
> >   			spin_unlock(&ppgtt->base.pd.lock);
> > -			pt = alloc_pt(vm);
> > +			pt = alloc;
> 
> We have to reset this, no?

Yes, obviously missed after fixing the pattern for gen6.

> > +			if (!pt)
> > +				pt = alloc_pt(vm);
> >   			if (IS_ERR(pt))
> >   				goto unwind_out;
> 
> ret = PTR_ERR();

Sigh.

Thanks,
-Chris
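
For readers following the thread, below is a minimal user-space sketch of the
allocation pattern the commit message describes: never allocate (or free)
while holding the page-directory spinlock, keep a losing allocation around as
a spare so the next hole can reuse it, and free the spare only once at the
end. The page_table/entries/pd_lock names are illustrative stand-ins, not the
actual i915 structures or the gen6_alloc_va_range() code.

/* build: cc -pthread sketch.c -o sketch */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NR_ENTRIES 4

struct page_table { int dummy; };

static struct page_table scratch;              /* stand-in for vm->scratch_pt */
static struct page_table *entries[NR_ENTRIES]; /* stand-in for the PD entries */
static pthread_spinlock_t pd_lock;             /* stand-in for ppgtt->base.pd.lock */

static int alloc_range(unsigned int first, unsigned int count)
{
	struct page_table *spare = NULL;
	unsigned int idx;
	int ret = 0;

	pthread_spin_lock(&pd_lock);
	for (idx = first; idx < first + count; idx++) {
		if (entries[idx] == &scratch) {
			struct page_table *pt;

			/* Never allocate under the spinlock. */
			pthread_spin_unlock(&pd_lock);

			pt = spare;
			spare = NULL;
			if (!pt)
				pt = malloc(sizeof(*pt));
			if (!pt) {
				ret = -1;
				goto out;
			}

			pthread_spin_lock(&pd_lock);
			if (entries[idx] == &scratch)
				entries[idx] = pt;  /* we won the race: install */
			else
				spare = pt;         /* lost: keep it for the next hole */
		}
	}
	pthread_spin_unlock(&pd_lock);
out:
	/* Any unused spare is freed exactly once, outside the lock. */
	free(spare);
	return ret;
}

int main(void)
{
	unsigned int i;

	pthread_spin_init(&pd_lock, PTHREAD_PROCESS_PRIVATE);
	for (i = 0; i < NR_ENTRIES; i++)
		entries[i] = &scratch;

	if (alloc_range(0, NR_ENTRIES))
		return 1;

	for (i = 0; i < NR_ENTRIES; i++) {
		printf("entry %u: %s\n", i,
		       entries[i] == &scratch ? "scratch" : "allocated");
		if (entries[i] != &scratch)
			free(entries[i]);
	}
	pthread_spin_destroy(&pd_lock);
	return 0;
}

The re-check of entries[idx] after retaking the lock is what serialises the
two racing clients on the shadow entry; holding the spare until the end is
what lets the losing client avoid a free while the lock is held, which is the
point of the change being reviewed above.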