Re: [PATCH igt] igt/gem_eio: Exercise set-wedging against request submission

Quoting Antonio Argenziano (2018-02-20 22:31:58)
> On 07/02/18 01:50, Chris Wilson wrote:
> > +static void test_set_wedged(int fd)
> > +{
> > +#define NCTX 4096
> > +     const uint32_t bbe = MI_BATCH_BUFFER_END;
> > +     const int ring_size = measure_ring_size(fd, 0) - 1;
> > +     struct drm_i915_gem_execbuffer2 execbuf;
> > +     struct drm_i915_gem_exec_object2 obj;
> > +     struct itimerspec its;
> > +     struct sigevent sev;
> > +     uint32_t *contexts;
> > +     timer_t timer;
> > +     int timeline;
> > +     int syncpt;
> > +
> > +     contexts = calloc(NCTX, sizeof(*contexts));
> 
> This is pretty static now, will it not be in the future?

Even so, large arrays need to be malloced; we cannot assume a stack array
of NCTX=4096 entries is safe on all systems.

> > +     igt_assert(contexts);
> > +
> > +     for (int n = 0; n < NCTX; n++)
> > +             contexts[n] = gem_context_create(fd);
> > +
> > +     timeline = sw_sync_timeline_create();
> > +
> > +     memset(&obj, 0, sizeof(obj));
> > +     obj.handle = gem_create(fd, 4096);
> > +     gem_write(fd, obj.handle, 0, &bbe, sizeof(bbe));
> > +
> > +     memset(&execbuf, 0, sizeof(execbuf));
> > +     execbuf.buffers_ptr = to_user_pointer(&obj);
> > +     execbuf.buffer_count = 1;
> > +     execbuf.flags = I915_EXEC_FENCE_IN;
> > +
> > +     /* Build up a large orderly queue of requests */
> > +     syncpt = 1;
> > +     for (int n = 0; n < NCTX; n++) {
> > +             execbuf.rsvd1 = contexts[n];
> > +             for (int m = 0; m < ring_size; m++) {
> > +                     execbuf.rsvd2 =
> > +                             sw_sync_timeline_create_fence(timeline, syncpt);
> > +                     gem_execbuf(fd, &execbuf);
> > +                     close(execbuf.rsvd2);
> > +
> > +                     syncpt++;
> > +             }
> > +     }
> > +     igt_debug("Queued %d requests\n", syncpt);
> > +
> > +     igt_require(i915_reset_control(false));
> 
> Move require to before building the queue of requests so it can skip 
> quicker.

We've already tested reset_control before this point, so having it in
igt_require() is moot.

> > +     /* Feed each request in at 20KHz */
> > +     memset(&sev, 0, sizeof(sev));
> > +     sev.sigev_notify = SIGEV_THREAD;
> > +     sev.sigev_value.sival_int = timeline;
> > +     sev.sigev_notify_function = notify;
> > +     igt_assert(timer_create(CLOCK_MONOTONIC, &sev, &timer) == 0);
> > +
> > +     memset(&its, 0, sizeof(its));
> > +     its.it_value.tv_sec = 0;
> > +     its.it_value.tv_nsec = 20000;
> > +     its.it_interval.tv_sec = 0;
> > +     its.it_interval.tv_nsec = 5000;
> > +     igt_assert(timer_settime(timer, 0, &its, NULL) == 0);
> > +
> > +     igt_debug("Triggering wedge\n");
> > +     wedgeme(fd);
> 
> Does it hit the race consistently? I mean how useful would it be to put 
> the whole subtest in a loop?

You could run it for a few hours and still not expect to hit the small
window where the submit vfunc is changed while it is executing. (Never
mind that it's supposed to be serialised ;)
-Chris
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx



