Quoting Mika Kuoppala (2019-08-20 15:25:50)
> Chris Wilson <chris@xxxxxxxxxxxxxxxxxx> writes:
>
> > The current assertion tries to make sure that we do not over count the
> > number of used PDE inside a page directory -- that is with an array of
> > 512 pde, we do not expect more than 512 elements used! However, our
> > assertion has to take into account that as we pin an element into the
> > page directory, the caller first pins the page directory so the usage
> > count is one higher. However, this should be one extra pin per thread,
> > and the upper bound is that we may have one thread for each entry.
> >
> > Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> > Cc: Mika Kuoppala <mika.kuoppala@xxxxxxxxxxxxxxx>
> > ---
> >  drivers/gpu/drm/i915/i915_gem_gtt.c | 3 ++-
> >  1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
> > index e48df11a19fb..9435d184ddf2 100644
> > --- a/drivers/gpu/drm/i915/i915_gem_gtt.c
> > +++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
> > @@ -771,7 +771,8 @@ __set_pd_entry(struct i915_page_directory * const pd,
> >  	       struct i915_page_dma * const to,
> >  	       u64 (*encode)(const dma_addr_t, const enum i915_cache_level))
> >  {
> > -	GEM_BUG_ON(atomic_read(px_used(pd)) > ARRAY_SIZE(pd->entry));
> > +	/* Each thread pre-pins the pd, and we may have a thread per each pde */
> > +	GEM_BUG_ON(atomic_read(px_used(pd)) > 2 * ARRAY_SIZE(pd->entry));
>
> When I saw +1 wrt array_size, that should have rung some bells between
> my ears. I did increase it to +1 for the upper pinning, but the
> parallelism escaped me and no more bells were rung.

It completely escaped me, and I had every reason to make sure this
worked with multiple threads!

Thanks for the review,
-Chris
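
For anyone counting along: with a 512-entry page directory, each populated
pde accounts for one reference on the pd, and each thread that is
concurrently inserting holds one extra transient pre-pin, so the count the
assertion can observe is bounded by 512 + 512 = 2 * ARRAY_SIZE(pd->entry).
Below is a minimal user-space sketch of that counting argument only -- it is
not the i915 code, and the names (insert_entry, used, N_ENTRIES) are made
up -- exercising the bound with one thread per entry:

#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>

#define N_ENTRIES 512	/* size of pd->entry in the real driver */

static atomic_int used;	/* stands in for the pd use count */

static void *insert_entry(void *arg)
{
	/* Caller pre-pins the page directory before touching an entry. */
	atomic_fetch_add(&used, 1);

	/* Setting the entry takes one more reference for the entry itself. */
	int seen = atomic_fetch_add(&used, 1) + 1;
	assert(seen <= 2 * N_ENTRIES);	/* the relaxed bound */

	/* Drop the transient pre-pin once the entry is in place. */
	atomic_fetch_sub(&used, 1);
	return NULL;
}

int main(void)
{
	pthread_t threads[N_ENTRIES];
	int i;

	for (i = 0; i < N_ENTRIES; i++)
		pthread_create(&threads[i], NULL, insert_entry, NULL);
	for (i = 0; i < N_ENTRIES; i++)
		pthread_join(threads[i], NULL);

	/* Steady state: exactly one reference per populated pde. */
	assert(atomic_load(&used) == N_ENTRIES);
	return 0;
}

The factor of two only has to cover the transient window while insertions
are in flight; once the threads have dropped their pre-pins, the count
settles back to one reference per populated entry.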