On ke, 2015-12-16 at 11:42 +0100, Daniel Vetter wrote:
> On Tue, Dec 15, 2015 at 01:36:07PM +0200, Mika Kuoppala wrote:
> > Chris Wilson <chris@xxxxxxxxxxxxxxxxxx> writes:
> > 
> > > On Mon, Dec 14, 2015 at 07:14:23PM +0200, Mika Kuoppala wrote:
> > > > When we drop caches, this debugfs entry does hardware access
> > > > later in the chain, when fences are updated, so it needs a
> > > > runtime pm ref.
> > > > 
> > > > Dropping caches is used by some igt/bat tests, so this fixes
> > > > some unclaimed register access traces.
> > > > 
> > > > Signed-off-by: Mika Kuoppala <mika.kuoppala@xxxxxxxxx>
> > > > ---
> > > >  drivers/gpu/drm/i915/i915_debugfs.c | 3 +++
> > > >  1 file changed, 3 insertions(+)
> > > > 
> > > > diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
> > > > index 24318b7..bd8ba7e 100644
> > > > --- a/drivers/gpu/drm/i915/i915_debugfs.c
> > > > +++ b/drivers/gpu/drm/i915/i915_debugfs.c
> > > > @@ -4839,6 +4839,8 @@ i915_drop_caches_set(void *data, u64 val)
> > > >  	if (ret)
> > > >  		return ret;
> > > >  
> > > > +	intel_runtime_pm_get(dev_priv);
> > > 
> > > The current idea of the very coarse granularity of rpm_get() is
> > > to do it before struct_mutex (since rpm_get resume may try to
> > > acquire the mutex iirc).
> > > 
> > > Ok, fixing that may be bolting the stable door after the horse
> > > bolted, but we should nevertheless. In my opinion, it would be
> > > more productive to work with Imre on making rpm fine-grained so
> > > that we don't need so many and can actually place the wakelock
> > > around the hardware access itself, not every single path that
> > > *may* touch hardware.
> > 
> > Please consider 1/6 v2 as it is needed to avoid random unclaimed
> > accesses during igt/bat if drop caches is used in the wrong spot.
> 
> Yeah, wakelocks work like locks, except lockdep doesn't check them
> because they're not really locks in all aspects. But anyway, we need
> to obey ordering constraints.
> 
> Hm ... maybe it would be possible to annotate get/put_rpm with a
> lockdep context (and trylock for the others), and then also acquire
> that context around the actual resume/suspend functions? Might be
> worth a shot as a tech demo, perhaps even adding that to the pm core
> and pinging Rafael for feedback, since auditing correctness here is
> way too much pain.

Yes, this idea came up already:

http://lists.freedesktop.org/archives/intel-gfx/2015-November/079807.html

and I have a preliminary patch at:

https://github.com/ideak/linux/commit/8f63aaaef27f3c56a1996ea4ac6a8393e0af4e44

--Imre
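
To make the ordering point above concrete, here is a minimal sketch,
assuming the usual i915_drop_caches_set() layout of that era where
struct_mutex is taken with mutex_lock_interruptible() just before the
hunk quoted in the patch. It is not the v2 patch itself, only an
illustration of taking the wakeref before struct_mutex:

static int
i915_drop_caches_set(void *data, u64 val)
{
	struct drm_device *dev = data;
	struct drm_i915_private *dev_priv = dev->dev_private;
	int ret;

	/* Take the wakeref first: runtime resume may itself want struct_mutex. */
	intel_runtime_pm_get(dev_priv);

	ret = mutex_lock_interruptible(&dev->struct_mutex);
	if (ret) {
		intel_runtime_pm_put(dev_priv);
		return ret;
	}

	/* ... drop the requested caches / idle the GPU, which may update fences ... */

	mutex_unlock(&dev->struct_mutex);
	intel_runtime_pm_put(dev_priv);

	return 0;
}

Taken in this order, a synchronous runtime resume that needs
struct_mutex cannot deadlock against a caller already holding it
across the get.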
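
For the lockdep annotation idea, a rough sketch of one way it could be
wired up, assuming a dedicated lockdep_map for the rpm wakeref. The
names i915_rpm_lockdep_map and __intel_runtime_resume are made up for
illustration, and this is not the preliminary patch linked above:

#include <linux/lockdep.h>
#include <linux/pm_runtime.h>

/* One lockdep class representing the implicit "rpm wakeref" context. */
static struct lockdep_map i915_rpm_lockdep_map =
	STATIC_LOCKDEP_MAP_INIT("i915_runtime_pm", &i915_rpm_lockdep_map);

void intel_runtime_pm_get(struct drm_i915_private *dev_priv)
{
	struct device *device = &dev_priv->dev->pdev->dev;

	/*
	 * Tell lockdep that this call may run the resume handler
	 * synchronously, so everything held here gets cross-checked
	 * against the locks taken inside intel_runtime_resume().
	 */
	lock_map_acquire(&i915_rpm_lockdep_map);
	pm_runtime_get_sync(device);
	lock_map_release(&i915_rpm_lockdep_map);

	WARN(dev_priv->pm.suspended, "Device still suspended.\n");
}

static int intel_runtime_resume(struct device *device)
{
	int ret;

	/* The resume work itself runs inside the same lockdep context. */
	lock_map_acquire(&i915_rpm_lockdep_map);
	ret = __intel_runtime_resume(device);	/* illustrative helper */
	lock_map_release(&i915_rpm_lockdep_map);

	return ret;
}

With both sides annotated, lockdep would flag a
struct_mutex-under-wakeref inversion the first time the path is
exercised, instead of relying on a manual audit of every caller.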