On Wed, 2017-12-06 at 12:49 +0000, Chris Wilson wrote:
> As writes through the GTT and GGTT PTE updates do not share the same
> path, they are not strictly ordered and so we must explicitly flush the
> indirect writes prior to modifying the PTE. We do track outstanding GGTT
> writes on the object itself, but since the object may have multiple GGTT
> vma, that is overly coarse as we can track and flush individual vma as
> required.
>
> Whilst here, update the GGTT flushing behaviour for Cannonlake.
>
> v2: Hard-code ring offset to allow use during unload (after RCS may have
> been freed, or never existed!)
>
> References: https://bugs.freedesktop.org/show_bug.cgi?id=104002
> Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> Cc: Joonas Lahtinen <joonas.lahtinen@xxxxxxxxxxxxxxx>

One comment below, not strictly related to this patch.

Reviewed-by: Joonas Lahtinen <joonas.lahtinen@xxxxxxxxxxxxxxx>

Regards, Joonas

> +static void
> +flush_write_domain(struct drm_i915_gem_object *obj, unsigned int flush_domains)
> +{
> +	struct drm_i915_private *dev_priv = to_i915(obj->base.dev);
> +	struct i915_vma *vma;
> +
> +	if (!(obj->base.write_domain & flush_domains))
> +		return;
> +
> 	switch (obj->base.write_domain) {
> 	case I915_GEM_DOMAIN_GTT:
> -		if (!HAS_LLC(dev_priv)) {
> -			intel_runtime_pm_get(dev_priv);
> -			spin_lock_irq(&dev_priv->uncore.lock);
> -			POSTING_READ_FW(RING_HEAD(dev_priv->engine[RCS]->mmio_base));
> -			spin_unlock_irq(&dev_priv->uncore.lock);
> -			intel_runtime_pm_put(dev_priv);
> -		}
> +		i915_gem_flush_ggtt_writes(dev_priv);
>
> 		intel_fb_obj_flush(obj,
> 				   fb_write_origin(obj, I915_GEM_DOMAIN_GTT));
> +
> +		list_for_each_entry(vma, &obj->vma_list, obj_link) {
> +			if (!i915_vma_is_ggtt(vma))

This pattern could use a for_each_ggtt_vma() macro or such.
Regards, Joonas
--
Joonas Lahtinen
Open Source Technology Center
Intel Corporation