On Tue, Oct 06, 2015 at 02:54:25PM +0200, Daniel Vetter wrote:
> On Thu, Oct 01, 2015 at 12:18:26PM +0100, Chris Wilson wrote:
> > Often it is very useful to know why we suddenly purge vast tracts of
> > memory and surprisingly up until now we didn't even have a tracepoint
> > for when we shrink our memory.
> >
> > Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> > ---
> >  drivers/gpu/drm/i915/i915_gem_shrinker.c |  2 ++
> >  drivers/gpu/drm/i915/i915_trace.h        | 20 ++++++++++++++++++++
> >  2 files changed, 22 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/i915/i915_gem_shrinker.c b/drivers/gpu/drm/i915/i915_gem_shrinker.c
> > index b627d07fad29..88f66a2586ec 100644
> > --- a/drivers/gpu/drm/i915/i915_gem_shrinker.c
> > +++ b/drivers/gpu/drm/i915/i915_gem_shrinker.c
> > @@ -85,6 +85,8 @@ i915_gem_shrink(struct drm_i915_private *dev_priv,
> >  	}, *phase;
> >  	unsigned long count = 0;
> >
> > +	trace_i915_gem_shrink(dev_priv, target, flags);
>
> Shouldn't we also dump how many pages we actually managed to shrink, i.e.
> count (at the end of the function)?

I didn't, because I find the double tracepoints annoying, and you already
have the unbinds following. I guess shrink_begin/shrink_end (to be
consistent with wait_begin/_end) or shrink_start/_end (to be consistent
with slab) would work.

> Also we have a slab_start/end tracepoint already, but that one obviously
> doesn't cover the internal calls to i915_gem_shrink. Should imo be
> mentioned in the commit message.

Sure. I don't usually watch slab, so I don't have a marker for the
thousands of unbinds telling me what caused them.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
http://lists.freedesktop.org/mailman/listinfo/intel-gfx
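
For reference, the i915_trace.h half of the patch is not quoted above; it
would follow the existing i915 TRACE_EVENT pattern. A minimal sketch,
assuming the event simply records the drm minor index plus the target and
flags passed to i915_gem_shrink (the field names and layout here are
illustrative, not necessarily the exact hunk):

TRACE_EVENT(i915_gem_shrink,
	    TP_PROTO(struct drm_i915_private *i915,
		     unsigned long target, unsigned flags),
	    TP_ARGS(i915, target, flags),

	    TP_STRUCT__entry(
			     __field(int, dev)
			     __field(unsigned long, target)
			     __field(unsigned, flags)
			     ),

	    TP_fast_assign(
			   /* which device the shrink hit, and what was asked for */
			   __entry->dev = i915->dev->primary->index;
			   __entry->target = target;
			   __entry->flags = flags;
			   ),

	    TP_printk("dev=%d, target=%lu, flags=%x",
		      __entry->dev, __entry->target, __entry->flags)
);

With something like that in place, enabling the event (e.g. via
/sys/kernel/debug/tracing/events/i915/ or trace-cmd record -e
i915:i915_gem_shrink) puts a marker in front of the stream of unbinds
showing the target and the I915_SHRINK_* flags that triggered them.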