On Thu, Mar 11, 2021 at 10:24:38AM -0600, Jason Ekstrand wrote:
> On Thu, Mar 11, 2021 at 9:57 AM Daniel Vetter <daniel@xxxxxxxx> wrote:
> >
> > On Thu, Mar 11, 2021 at 4:50 PM Jason Ekstrand <jason@xxxxxxxxxxxxxx> wrote:
> > >
> > > On Thu, Mar 11, 2021 at 5:44 AM Zbigniew Kempczyński
> > > <zbigniew.kempczynski@xxxxxxxxx> wrote:
> > > >
> > > > On Wed, Mar 10, 2021 at 03:50:07PM -0600, Jason Ekstrand wrote:
> > > > > The Vulkan driver in Mesa for Intel hardware never uses relocations if
> > > > > it's running on a version of i915 that supports at least softpin, which
> > > > > all versions of i915 supporting Gen12 do. On the OpenGL side, Gen12+ is
> > > > > only supported by iris, which never uses relocations. The older i965
> > > > > driver in Mesa does use relocations, but it only supports Intel hardware
> > > > > through Gen11 and has been deprecated for all hardware Gen9+. The
> > > > > compute driver also never uses relocations. This leaves only the media
> > > > > driver, which is supposed to be switching to softpin going forward.
> > > > > Making softpin a requirement for all future hardware seems reasonable.
> > > > >
> > > > > Rejecting relocations starting with Gen12 has the benefit that we don't
> > > > > have to bother supporting them on platforms with local memory. Given how
> > > > > much CPU touching of memory is required for relocations, not having to
> > > > > do so on platforms where not all memory is directly CPU-accessible
> > > > > carries significant advantages.
> > > > >
> > > > > v2 (Jason Ekstrand):
> > > > >   - Allow TGL-LP platforms as they've already shipped
> > > > >
> > > > > v3 (Jason Ekstrand):
> > > > >   - WARN_ON platforms with LMEM support in case the check is wrong
> > > >
> > > > I was asked to review this patch.
> > > > It works along with the expected IGT check:
> > > > https://patchwork.freedesktop.org/patch/423361/?series=82954&rev=25
> > > >
> > > > Before I give you my r-b - isn't i915_gem_execbuffer2_ioctl() a better
> > > > place to do the for loop, just after copy_from_user(), and check
> > > > relocation_count there? We have access to exec2_list at that point and
> > > > we know the gen, so we're able to say relocations are not supported
> > > > immediately, without entering i915_gem_do_execbuffer().
> > >
> > > I considered that, but it adds an extra object-list walk for a case
> > > which we expect not to happen. I'm not sure how expensive the list
> > > walk would be if all we do is check the number of relocations on each
> > > object. I guess, if it comes right after a copy_from_user(), it's all
> > > hot in the cache, so it shouldn't matter. Ok. I've convinced myself.
> > > I'll move it.
> >
> > I really wouldn't move it if it's another list walk. Execbuf has a lot
> > of fast paths going on, and we have extensive tests to make sure it
> > unwinds correctly in all cases. It's not very intuitive, but execbuf
> > code isn't scoring very high on that.
>
> And here I'd just finished doing the typing to move it. Good thing I
> hadn't closed vim yet and it was still in my undo buffer. :-)

Before entering the "slower" path, from my perspective, I would just check
the batch object at that place. We would still have a single list
walkthrough and a quick check at the very beginning.
--
Zbigniew

> --Jason
>
> > -Daniel
> >
> > >
> > > --Jason
> > >
> > > > --
> > > > Zbigniew
> > > >
> > > > >
> > > > > Signed-off-by: Jason Ekstrand <jason@xxxxxxxxxxxxxx>
> > > > > Cc: Dave Airlie <airlied@xxxxxxxxxx>
> > > > > Cc: Daniel Vetter <daniel.vetter@xxxxxxxxx>
> > > > > ---
> > > > >  drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c | 15 ++++++++++++---
> > > > >  1 file changed, 12 insertions(+), 3 deletions(-)
> > > > >
> > > > > diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> > > > > index 99772f37bff60..b02dbd16bfa03 100644
> > > > > --- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> > > > > +++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
> > > > > @@ -1764,7 +1764,8 @@ eb_relocate_vma_slow(struct i915_execbuffer *eb, struct eb_vma *ev)
> > > > >  	return err;
> > > > >  }
> > > > >
> > > > > -static int check_relocations(const struct drm_i915_gem_exec_object2 *entry)
> > > > > +static int check_relocations(const struct i915_execbuffer *eb,
> > > > > +			     const struct drm_i915_gem_exec_object2 *entry)
> > > > >  {
> > > > >  	const char __user *addr, *end;
> > > > >  	unsigned long size;
> > > > > @@ -1774,6 +1775,14 @@ static int check_relocations(const struct drm_i915_gem_exec_object2 *entry)
> > > > >  	if (size == 0)
> > > > >  		return 0;
> > > > >
> > > > > +	/* Relocations are disallowed for all platforms after TGL-LP */
> > > > > +	if (INTEL_GEN(eb->i915) >= 12 && !IS_TIGERLAKE(eb->i915))
> > > > > +		return -EINVAL;
> > > > > +
> > > > > +	/* All discrete memory platforms are Gen12 or above */
> > > > > +	if (WARN_ON(HAS_LMEM(eb->i915)))
> > > > > +		return -EINVAL;
> > > > > +
> > > > >  	if (size > N_RELOC(ULONG_MAX))
> > > > >  		return -EINVAL;
> > > > >
> > > > > @@ -1807,7 +1816,7 @@ static int eb_copy_relocations(const struct i915_execbuffer *eb)
> > > > >  		if (nreloc == 0)
> > > > >  			continue;
> > > > >
> > > > > -		err = check_relocations(&eb->exec[i]);
> > > > > +		err = check_relocations(eb, &eb->exec[i]);
> > > > >  		if (err)
> > > > >  			goto err;
> > > > >
> > > > > @@ -1880,7 +1889,7 @@ static int eb_prefault_relocations(const struct i915_execbuffer *eb)
> > > > >  	for (i = 0; i < count; i++) {
> > > > >  		int err;
> > > > >
> > > > > -		err = check_relocations(&eb->exec[i]);
> > > > > +		err = check_relocations(eb, &eb->exec[i]);
> > > > >  		if (err)
> > > > >  			return err;
> > > > >  	}
> > > > > --
> > > > > 2.29.2
> > > > >
> > > > > _______________________________________________
> > > > > dri-devel mailing list
> > > > > dri-devel@xxxxxxxxxxxxxxxxxxxxx
> > > > > https://lists.freedesktop.org/mailman/listinfo/dri-devel
> >
> > --
> > Daniel Vetter
> > Software Engineer, Intel Corporation
> > http://blog.ffwll.ch

_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx