On Wed, Nov 23, 2016 at 05:53:50PM +0200, Abdiel Janulgue wrote:
> +static int at_exit_drm_fd = -1;
> +
> +static void quiescent_gpu_at_exit(int sig)
> +{
> +	if (at_exit_drm_fd < 0)
> +		return;
> +
> +	gem_quiescent_gpu(at_exit_drm_fd);
> +	at_exit_drm_fd = -1;
> +}
> +
> +/**
> + * igt_spin_batch_new:
> + * @fd: open i915 drm file descriptor
> + * @engine: Ring to execute batch OR'd with execbuf flags. If value is less
> + * than 0, execute on all available rings.
> + * @dep_handle: handle to a buffer object dependency. If greater than 0, add a
> + * relocation entry to this buffer within the batch.
> + *
> + * Start a recursive batch on a ring. Immediately returns a #igt_spin_t that
> + * contains the batch's handle that can be waited upon. The returned structure
> + * must be passed to igt_spin_batch_free() for post-processing.
> + *
> + * Returns:
> + * Structure with helper internal state for igt_spin_batch_free().
> + */
> +igt_spin_t *
> +igt_spin_batch_new(int fd, int engine, unsigned dep_handle)
> +{
> +	igt_spin_t *spin = calloc(1, sizeof(struct igt_spin));
> +	uint32_t handle = emit_recursive_batch(fd, engine, dep_handle);
> +	igt_assert(gem_bo_busy(fd, handle));
> +	at_exit_drm_fd = fd;
> +	igt_install_exit_handler(quiescent_gpu_at_exit);

We already do igt_install_exit_handler(quiescent_gpu_at_exit); doing a
plain one ourselves still incurs a GPU hang if the user hits ^C. (And we
are installing one for every new spinner.)

What I meant was that if we kept a list of active spinners and walked
that list from gem_quiescent_gpu(), we could leverage the existing
infrastructure to ensure that the GPU is idled as quickly as possible
after a failed/interrupted test (without GPU hangs causing havoc).

P.S. Imagine running checkpatch.pl and keeping in the habit of using the
kernel CodingStyle.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
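
[Editor's sketch of the suggestion above, for illustration only: keep every
active spinner on a global list and end them all from one cleanup path that
the existing quiescing infrastructure can call. The names here
(active_spinners, spin_track(), spin_untrack(), end_all_spinners()) and the
struct layout are assumptions, not the actual lib/igt_dummyload.c code; the
one concrete detail relied on is that a spinning batch can be terminated by
overwriting its looping dword with MI_BATCH_BUFFER_END through a CPU
mapping.]

/*
 * Sketch: one global list of active spinners, walked from a single
 * cleanup path so an interrupted test idles the GPU without provoking
 * a hang.  Names and layout are illustrative assumptions.
 */
#include <stdint.h>

#define MI_BATCH_BUFFER_END	(0xA << 23)

struct igt_spin {
	uint32_t handle;		/* spinning batch buffer object */
	uint32_t *batch;		/* CPU mapping of the spin loop */
	struct igt_spin *next;		/* link in the global active list */
};

static struct igt_spin *active_spinners;

static void spin_track(struct igt_spin *spin)
{
	/* called once the spinning batch has been submitted */
	spin->next = active_spinners;
	active_spinners = spin;
}

static void spin_untrack(struct igt_spin *spin)
{
	/* called when the spinner is freed normally */
	struct igt_spin **link;

	for (link = &active_spinners; *link; link = &(*link)->next) {
		if (*link == spin) {
			*link = spin->next;
			break;
		}
	}
}

static void end_all_spinners(void)
{
	/*
	 * Hooked into the common quiescing path: rewrite each spin loop
	 * into MI_BATCH_BUFFER_END so every batch completes on its own
	 * instead of waiting for hang recovery.
	 */
	struct igt_spin *spin;

	for (spin = active_spinners; spin; spin = spin->next) {
		*spin->batch = MI_BATCH_BUFFER_END;
		__sync_synchronize();	/* make the write visible to the GPU */
	}
}

[With something like this, gem_quiescent_gpu() could call end_all_spinners()
before waiting for idle, so a ^C during a test no longer has to ride out a
GPU hang.]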