On Mon, Apr 20, 2015 at 01:50:48PM +0100, Chris Wilson wrote:
> On Mon, Apr 20, 2015 at 01:14:14PM +0100, Tvrtko Ursulin wrote:
> > From: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
> > 
> > Using imported objects should not leak i915 vmas (and vms).
> > 
> > In practice this simulates Xorg importing fbcon and leaking (or not) one vma
> > per Xorg startup cycle.
> > 
> > Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
> > Cc: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> > ---
> >  tests/gem_ppgtt.c | 100 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 100 insertions(+)
> > 
> > diff --git a/tests/gem_ppgtt.c b/tests/gem_ppgtt.c
> > index 5bf773c..9b5b3ee 100644
> > --- a/tests/gem_ppgtt.c
> > +++ b/tests/gem_ppgtt.c
> > @@ -200,6 +200,103 @@ static void surfaces_check(dri_bo **bo, int count, uint32_t expected)
> >  	}
> >  }
> > 
> > +static void flink_and_contexts(void)
> > +{
> > +	drm_intel_bufmgr *bufmgr;
> > +	int fd, fd2;
> > +	dri_bo *bo;
> > +	uint32_t flink_name;
> > +	unsigned int max_line = 0, num_lines = 0, cnt = 0;
> > +	int ret;
> > +
> > +	/*
> > +	 * Export a bo via flink and access it from a child process via a
> > +	 * different ppgtt context. Make sure when the child exits that the vma
> > +	 * (and hence the vm) associated with its ppgtt context is torn down.
> > +	 */
> > +
> > +	fd = drm_open_any();
> > +
> > +	bufmgr = drm_intel_bufmgr_gem_init(fd, 4096);
> > +	igt_assert(bufmgr);
> > +
> > +	bo = create_bo(bufmgr, 0);
> > +	igt_assert(bo);
> > +
> > +	ret = drm_intel_bo_flink(bo, &flink_name);
> > +	igt_assert(ret == 0);
> > +
> > +	igt_fork(child, 20) {
> > +		int devid;
> > +		drm_intel_bufmgr *bufmgr;
> > +		int fd;
> > +		dri_bo *bo, *import;
> > +		struct intel_batchbuffer *batch;
> > +
> > +		fd = drm_open_any();
> > +		devid = intel_get_drm_devid(fd);
> > +
> > +		bufmgr = drm_intel_bufmgr_gem_init(fd, 4096);
> > +		igt_assert(bufmgr);
> > +
> > +		bo = create_bo(bufmgr, 0);
> > +		import = drm_intel_bo_gem_create_from_name(bufmgr,
> > +							   "flink_and_contexts",
> > +							   flink_name);
> > +		igt_assert(bo && import);
> > +
> > +		batch = intel_batchbuffer_alloc(bufmgr, devid);
> > +		igt_assert(batch);
> > +
> > +		intel_copy_bo(batch, bo, import, SIZE);
> 
> Do we care about any differentiation between the default per-file
> context and explicitly created contexts?
> 
> > +		intel_batchbuffer_free(batch);
> > +
> > +		dri_bo_unreference(import);
> > +		dri_bo_unreference(bo);
> > +
> > +		drm_intel_bufmgr_destroy(bufmgr);
> > +		close(fd);
> > +
> > +		exit(0);
> > +	}
> > +
> > +	igt_waitchildren();
> > +
> > +	/*
> > +	 * Count the longest line in the file which lists the vmas per object.
> > +	 * This might be a bit fragile so maybe there is a better way.
> > +	 */
> 
> So what I was thinking was something along the lines of:
> 
> 	/* record the return value from exec[0].offset */
> 	intel_copy_bo(&leaked_vma_offset);
> 
> 	gem_close(exec[0].handle);
> 
> 	/* flush execbuffer and the vma should be gone */
> 	gem_sync();
> 
> 	intel_copy_bo(&new_vma_offset);
> 	igt_assert(new_vma_offset == leaked_vma_offset);
> 
> This will take a bit of judicious planning (like doing a self-copy on the
> dst first to make sure it is bound ahead of the leak, and not reallocating
> the batch handle).
> 
> Or alternatively, we know the name of the flinked object, so we can do
> an explicit search! Ok, that seems more robust, but you do need some sync
> point in there (either gem_sync() every child before exit or
> gem_quiescent_gpu()).
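For illustration, that offset-reuse check could be fleshed out roughly as
below. This is only a sketch built on the helpers the test already uses
(create_bo(), SIZE, the libdrm blit wrappers); check_vma_reuse() is a made-up
name, and it assumes libdrm reports the offsets chosen by execbuf in
bo->offset64 once the batch has been flushed:

/* Hypothetical helper sketching the offset-reuse idea: bind an imported bo,
 * remember where it landed, drop it, and verify that a fresh import of the
 * same flink name is placed at the same offset once the old vma is gone. */
static void check_vma_reuse(int fd, drm_intel_bufmgr *bufmgr,
			    struct intel_batchbuffer *batch,
			    dri_bo *dst, uint32_t flink_name)
{
	dri_bo *import;
	uint64_t leaked_vma_offset, new_vma_offset;

	/* Self-copy first so dst is bound ahead of the import and its
	 * placement is not disturbed by the later rebinding. */
	intel_copy_bo(batch, dst, dst, SIZE);

	import = drm_intel_bo_gem_create_from_name(bufmgr, "import",
						   flink_name);
	igt_assert(import);

	intel_copy_bo(batch, dst, import, SIZE);
	leaked_vma_offset = import->offset64; /* updated after execbuf */

	dri_bo_unreference(import);

	/* Wait for the copy to retire so the closed handle's vma can
	 * actually be unbound. */
	gem_sync(fd, dst->handle);

	/* Re-import and copy again: if the old vma was torn down, the new
	 * one is expected to land at the very same offset. */
	import = drm_intel_bo_gem_create_from_name(bufmgr, "import",
						   flink_name);
	igt_assert(import);
	intel_copy_bo(batch, dst, import, SIZE);
	new_vma_offset = import->offset64;

	igt_assert(new_vma_offset == leaked_vma_offset);

	dri_bo_unreference(import);
}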
Another option for making the leak check more robust is to allocate a few
test objects and double-check that the count we deduce increments/decrements
by the correct amount (sketched below). But checking the offsets the kernel
hands out from execbuf should work well too; we have a few other tests
relying upon that.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
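A rough sketch of that calibration idea, under the assumption that
count_flink_vmas() stands in for whatever parsing of the per-object vma
listing in debugfs the test already does; the helper name, N_EXTRA and the
exact counting semantics are all hypothetical:

#define N_EXTRA 4

/* Bind the flinked object from a known number of extra fds (hence extra
 * ppgtt vms) and check that the deduced vma count moves by exactly that
 * amount, assuming the parser counts one vma per vm for the object. */
static void calibrate_vma_count(int parent_fd, uint32_t flink_name)
{
	int fds[N_EXTRA];
	drm_intel_bufmgr *bufmgrs[N_EXTRA];
	struct intel_batchbuffer *batches[N_EXTRA];
	dri_bo *bos[N_EXTRA], *imports[N_EXTRA];
	unsigned int before, bound, after;
	int i;

	before = count_flink_vmas(flink_name); /* hypothetical parser */

	for (i = 0; i < N_EXTRA; i++) {
		fds[i] = drm_open_any();
		bufmgrs[i] = drm_intel_bufmgr_gem_init(fds[i], 4096);
		batches[i] = intel_batchbuffer_alloc(bufmgrs[i],
						intel_get_drm_devid(fds[i]));
		bos[i] = create_bo(bufmgrs[i], 0);
		imports[i] = drm_intel_bo_gem_create_from_name(bufmgrs[i],
							       "import",
							       flink_name);
		igt_assert(bos[i] && imports[i]);
		/* Execute a copy so a vma is bound in this fd's vm. */
		intel_copy_bo(batches[i], bos[i], imports[i], SIZE);
	}

	bound = count_flink_vmas(flink_name);
	igt_assert(bound == before + N_EXTRA);

	/* Tear everything down and check the count drops back. */
	for (i = 0; i < N_EXTRA; i++) {
		intel_batchbuffer_free(batches[i]);
		dri_bo_unreference(imports[i]);
		dri_bo_unreference(bos[i]);
		drm_intel_bufmgr_destroy(bufmgrs[i]);
		close(fds[i]);
	}

	gem_quiescent_gpu(parent_fd);
	after = count_flink_vmas(flink_name);
	igt_assert(after == before);
}

The final assertion doubles as the leak check itself: once the extra fds are
closed and the GPU is idle, the count is expected to return to its starting
value.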