Re: [igt-dev] [PATCH igt] test/gem_exec_schedule: Check each engine is an independent timeline

On 23/04/2018 18:08, Chris Wilson wrote:
Quoting Tvrtko Ursulin (2018-04-23 17:52:54)

On 23/04/2018 14:43, Chris Wilson wrote:
In the existing ABI, each engine operates its own timeline
(fence.context) and so should execute independently of any other. If we
install a blocker on all other engines, that should not affect execution
on the local engine.

Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
Cc: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
---
   tests/gem_exec_schedule.c | 90 +++++++++++++++++++++++++++++++++++----
   1 file changed, 82 insertions(+), 8 deletions(-)

diff --git a/tests/gem_exec_schedule.c b/tests/gem_exec_schedule.c
index 5d0f215b2..471275169 100644
--- a/tests/gem_exec_schedule.c
+++ b/tests/gem_exec_schedule.c
@@ -49,9 +49,9 @@
IGT_TEST_DESCRIPTION("Check that we can control the order of execution");

-static void store_dword(int fd, uint32_t ctx, unsigned ring,
-                     uint32_t target, uint32_t offset, uint32_t value,
-                     uint32_t cork, unsigned write_domain)
+static uint32_t __store_dword(int fd, uint32_t ctx, unsigned ring,
+                           uint32_t target, uint32_t offset, uint32_t value,
+                           uint32_t cork, unsigned write_domain)
   {
       const int gen = intel_gen(intel_get_drm_devid(fd));
       struct drm_i915_gem_exec_object2 obj[3];
@@ -100,7 +100,17 @@ static void store_dword(int fd, uint32_t ctx, unsigned ring,
       batch[++i] = MI_BATCH_BUFFER_END;
       gem_write(fd, obj[2].handle, 0, batch, sizeof(batch));
       gem_execbuf(fd, &execbuf);
-     gem_close(fd, obj[2].handle);
+
+     return obj[2].handle;
+}
+
+static void store_dword(int fd, uint32_t ctx, unsigned ring,
+                     uint32_t target, uint32_t offset, uint32_t value,
+                     uint32_t cork, unsigned write_domain)
+{
+     gem_close(fd, __store_dword(fd, ctx, ring,
+                                 target, offset, value,
+                                 cork, write_domain));
   }
static uint32_t create_highest_priority(int fd)
@@ -161,6 +171,64 @@ static void fifo(int fd, unsigned ring)
       munmap(ptr, 4096);
   }
+static void independent(int fd, unsigned int engine)
+{
+     IGT_CORK_HANDLE(cork);
+     uint32_t scratch, plug, batch;
+     igt_spin_t *spin = NULL;
+     unsigned int other;
+     uint32_t *ptr;
+
+     igt_require(engine != 0);
+
+     scratch = gem_create(fd, 4096);
+     plug = igt_cork_plug(&cork, fd);
+
+     /* Check that we can submit to engine while all others are blocked */
+     for_each_physical_engine(fd, other) {
+             if (other == engine)
+                     continue;
+
+             if (spin == NULL) {
+                     spin = __igt_spin_batch_new(fd, 0, other, 0);
+             } else {
+                     struct drm_i915_gem_exec_object2 obj = {
+                             .handle = spin->handle,
+                     };
+                     struct drm_i915_gem_execbuffer2 eb = {
+                             .buffer_count = 1,
+                             .buffers_ptr = to_user_pointer(&obj),
+                             .flags = other,
+                     };
+                     gem_execbuf(fd, &eb);
+             }
+
+             store_dword(fd, 0, other, scratch, 0, other, plug, 0);
+     }
+     igt_require(spin);
+
+     /* Same priority, but different timeline (as different engine) */
+     batch = __store_dword(fd, 0, engine, scratch, 0, engine, plug, 0);
+
+     unplug_show_queue(fd, &cork, engine);
+     gem_close(fd, plug);
+
+     gem_sync(fd, batch);
+     gem_close(fd, batch);

Strictly speaking I think you need to use the pollable spinner and wait
on it here, before the busy assert. It's unlikely, but the spinners on the
'other' engines are submitted asynchronously with respect to the
store-dword batch on 'engine'.

We've waited for its completion, so we know the batch is idle and the
others are still busy. We then check that its seqno is written to the
scratch, so using a pollable spinner here is redundant. And then we check
that the others run afterwards.

Yeah, I was confused - I thought the busy check on the spinner could return false if the respective tasklet on those engines hadn't run yet. But of course busy is true immediately after execbuf, so, as I said, total confusion.

Regards,

Tvrtko
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx



