Quoting Tvrtko Ursulin (2020-10-13 11:34:11)
> 
> On 13/10/2020 11:04, Chris Wilson wrote:
> > Quoting Tvrtko Ursulin (2020-10-13 10:46:12)
> >> From: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
> >>
> >> As it turns out, opening the perf fd in group mode still produces separate
> >> file descriptors for all members of the group, which in turn need to be
> >> closed manually to avoid leaking them.
> > 
> > Hmm. That caught me by surprise, but yes, while close(group) does call
> > free_event() on all its children [aiui], it will not remove the fd, and
> > each event does receive its own fd. And since close(child) will call
> > into perf_event_release, we do have to keep the fd alive until the end.
> > 
> >> Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
> >> ---
> >>  tests/i915/perf_pmu.c | 130 +++++++++++++++++++++++++-----------------
> >>  1 file changed, 78 insertions(+), 52 deletions(-)
> >>
> >> diff --git a/tests/i915/perf_pmu.c b/tests/i915/perf_pmu.c
> >> index 873b275dca6b..6f8bec28d274 100644
> >> --- a/tests/i915/perf_pmu.c
> >> +++ b/tests/i915/perf_pmu.c
> >> @@ -475,7 +475,8 @@ busy_check_all(int gem_fd, const struct intel_execution_engine2 *e,
> >>  
> >>  	end_spin(gem_fd, spin, FLAG_SYNC);
> >>  	igt_spin_free(gem_fd, spin);
> >> -	close(fd[0]);
> >> +	for (i = 0; i < num_engines; i++)
> >> +		close(fd[i]);
> > 
> > close_group(fd, num_engines) ?
> 
> I am not too keen on that since there is a local open_group which does
> not operate on the fd array. Making open_group manage the array and
> count crossed my mind, but it felt like a bit too much.

Ok, I was thinking I could live with the implementation asymmetry for
the semantic symmetry :)

Reviewed-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
[trusting in CI to do a better job validating all the extra loops]

I did ponder using a dup2() to prove the group was closed (and not
closed before the fixes), but that seems pointless. However, maybe
something like count("/proc/self/fd") at the end would show whether
we've caught all the leaks?
-Chris
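
As a standalone illustration of the fd-per-member behaviour described
above: each perf_event_open() call returns its own fd even when the
events share a group, so closing only the leader leaks the members.
The sketch below uses generic software events as placeholders for the
i915 PMU ones, and open_pmu() is a made-up helper, not perf_pmu.c code.

/*
 * Minimal sketch: two perf events opened in one group still yield two
 * distinct fds, and both must be closed. Software events stand in for
 * the i915 PMU events here.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

static int open_pmu(unsigned long config, int group_fd)
{
	struct perf_event_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_SOFTWARE;	/* placeholder event type */
	attr.config = config;

	/* pid = 0, cpu = -1: count this process on any CPU */
	return syscall(__NR_perf_event_open, &attr, 0, -1, group_fd, 0);
}

int main(void)
{
	int fd[2], i;

	fd[0] = open_pmu(PERF_COUNT_SW_CPU_CLOCK, -1);     /* group leader */
	fd[1] = open_pmu(PERF_COUNT_SW_TASK_CLOCK, fd[0]); /* group member */

	printf("leader fd %d, member fd %d\n", fd[0], fd[1]);

	/* close(fd[0]) alone would leak fd[1]; close every member */
	for (i = 0; i < 2; i++)
		close(fd[i]);

	return 0;
}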
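
The count("/proc/self/fd") idea could look roughly like the helper
below; count_open_fds() is hypothetical, not an existing IGT function.

/*
 * Count this process's open file descriptors by walking /proc/self/fd,
 * so a test can compare the count before and after its body and assert
 * that nothing leaked.
 */
#include <dirent.h>

static unsigned int count_open_fds(void)
{
	DIR *dir = opendir("/proc/self/fd");
	struct dirent *de;
	unsigned int count = 0;

	if (!dir)
		return 0;

	while ((de = readdir(dir)))
		if (de->d_name[0] != '.')	/* skip "." and ".." */
			count++;

	/*
	 * opendir() holds one fd itself while we walk the directory;
	 * that cancels out if both sides of the comparison use this
	 * same helper.
	 */
	closedir(dir);

	return count;
}

A test could then record the count before opening the group and, say,
igt_assert_eq() the same count after closing it, which would have
caught the leaks these fixes address.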