Quoting Mika Kuoppala (2019-01-24 15:25:10)
> Chris Wilson <chris@xxxxxxxxxxxxxxxxxx> writes:
>
> > Performing a GPU reset clobbers the fence registers, affecting which
> > addresses the tiled GTT mmap access. If the driver does not take
> > precautions across a GPU reset, a client may read the wrong values (but
> > only within their own buffer, as the fence will only be degraded to
> > I915_TILING_NONE, reducing the access area). However, as this requires
> > performing a read using the indirect GTT at exactly the same time as the
> > reset occurs, it can be quite difficult to catch, so repeat the test
> > many times and across all cores simultaneously.
> >
> > Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> > ---
> > -	gem_set_tiling(fd, handle, i, 2048);
> > +	control = mmap(NULL, 4096, PROT_WRITE, MAP_SHARED | MAP_ANON, -1, 0);
> > +	igt_assert(control != MAP_FAILED);
> >
> > -	gtt[i] = gem_mmap__gtt(fd, handle, OBJECT_SIZE, PROT_WRITE);
> > -	set_domain_gtt(fd, handle);
> > -	gem_close(fd, handle);
> > -	}
> > +	igt_fork(child, ncpus) {
> > +		int last_pattern = 0;
> > +		int next_pattern = 1;
> > +		uint32_t *gtt[2];
>
> You throw tiling none out as it is just a distraction and a
> waste of cycles?

Fences being the name of the game, waiting on unfenced GTT accesses
represents a missed opportunity to detect the glitch. And the glitch is
hard enough to detect.

> > -	hang = igt_hang_ring(fd, I915_EXEC_RENDER);
> > +	for (int i = 0; i < ARRAY_SIZE(gtt); i++) {
> > +		uint32_t handle;
> >
> > -	do {
> > -		for (i = 0; i < OBJECT_SIZE / 64; i++) {
> > -			int x = 16*i + (i%16);
> > +		handle = gem_create(fd, OBJECT_SIZE);
> > +		gem_set_tiling(fd, handle, I915_TILING_X + i, 2048);
>
> You could have set this up a priori. But this prolly is faster than
> one reset cycle of tests, so there is nothing to gain.

I was thinking per-cpu so that we use more fences. But probably not:
detection depends on a memory access occurring between the reset and the
revoke, so the number of fences is less important than the sheer volume
of traffic needed to hit the small timing window.
-Chris
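
Putting the quoted hunks together, the test being discussed looks roughly
like the sketch below: each child hammers fenced (X- and Y-tiled) GTT mmaps
with a write/readback pattern while the parent injects GPU resets. The IGT
helpers used (gem_create, gem_set_tiling, gem_mmap__gtt, igt_fork,
igt_hang_ring, igt_post_hang_ring, igt_waitchildren, igt_assert_eq_u32) are
real IGT API; OBJECT_SIZE, the reset count, the simple write/readback
pattern, and the stop flag in the shared control page are illustrative
assumptions, not lifted from the actual patch.

#include "igt.h"
#include <sys/mman.h>

#define OBJECT_SIZE (1024 * 1024) /* arbitrary size for this sketch */

static void tiled_readback_vs_reset(int fd, int ncpus)
{
	volatile uint32_t *control;

	/* Shared page used to tell the children when to stop. */
	control = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
		       MAP_SHARED | MAP_ANON, -1, 0);
	igt_assert(control != MAP_FAILED);

	igt_fork(child, ncpus) {
		uint32_t *gtt[2];

		/* One X-tiled and one Y-tiled object, so both hold fences. */
		for (int i = 0; i < ARRAY_SIZE(gtt); i++) {
			uint32_t handle = gem_create(fd, OBJECT_SIZE);

			gem_set_tiling(fd, handle, I915_TILING_X + i, 2048);
			gtt[i] = gem_mmap__gtt(fd, handle, OBJECT_SIZE,
					       PROT_READ | PROT_WRITE);
			gem_close(fd, handle);
		}

		/*
		 * Hammer the fenced GTT mmaps: write a pattern, then read
		 * it straight back. If a reset clobbers the fence without
		 * the driver revoking the mmap first, the readback goes
		 * through the wrong (untiled) addressing and mismatches.
		 */
		while (!*control) {
			for (int i = 0; i < ARRAY_SIZE(gtt); i++) {
				for (unsigned j = 0;
				     j < OBJECT_SIZE / sizeof(uint32_t);
				     j += 1024)
					gtt[i][j] = j;
				for (unsigned j = 0;
				     j < OBJECT_SIZE / sizeof(uint32_t);
				     j += 1024)
					igt_assert_eq_u32(gtt[i][j], j);
			}
		}
	}

	/* Meanwhile, keep injecting GPU resets under the children's feet. */
	for (int i = 0; i < 10; i++) { /* repeat count is an assumption */
		igt_hang_t hang = igt_hang_ring(fd, I915_EXEC_RENDER);

		igt_post_hang_ring(fd, hang);
	}

	*control = 1; /* tell the children to stop */
	igt_waitchildren();
	munmap((void *)control, 4096);
}

The separate write and readback passes are deliberate: they leave a window
in each iteration for a reset to land between filling the buffer through
the fence and reading it back, which is the race the test is trying to hit.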