On Fri, May 06, 2016 at 02:53:54PM +0100, Dave Gordon wrote:
> On 06/05/16 14:27, Tvrtko Ursulin wrote:
> >
> >On 06/05/16 12:27, Chris Wilson wrote:
> >>On Fri, May 06, 2016 at 12:07:04PM +0100, Tvrtko Ursulin wrote:
> >>>From: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
> >>>
> >>>I have noticed some of our interrupt handlers use both dev and
> >>>dev_priv while they could get away with only dev_priv in the
> >>>huge majority of cases.
> >>>
> >>>Tidying that up had a cascading effect on changing functions
> >>>prototypes, so relatively big churn factor, but I think it is
> >>>for the better.
> >>>
> >>>For example even where changes cascade out of i915_irq.c, for
> >>>functions prefixed with intel_, genX_ or <plat>_, it makes more
> >>>sense to take dev_priv directly anyway.
> >>>
> >>>This allows us to eliminate local variables and intermixed usage
> >>>of dev and dev_priv where only one is good enough.
> >>>
> >>>End result is shrinkage of both source and the resulting binary.
> >>>
> >>>i915.ko:
> >>>
> >>> - .text 000b0899
> >>> + .text 000b0619
> >>>
> >>>Or if we look at the Gen8 display irq chain:
> >>>
> >>> -00000000000006ad t gen8_irq_handler
> >>> +0000000000000663 t gen8_irq_handler
> >>> -0000000000000028 T intel_opregion_asle_intr
> >>> +0000000000000024 T intel_opregion_asle_intr
> >>> -000000000000008c t ilk_hpd_irq_handler
> >>> +000000000000007f t ilk_hpd_irq_handler
> >>> -0000000000000116 T intel_check_page_flip
> >>> +0000000000000112 T intel_check_page_flip
> >>> -000000000000011a T intel_prepare_page_flip
> >>> +0000000000000119 T intel_prepare_page_flip
> >>> -0000000000000014 T intel_finish_page_flip_plane
> >>> +0000000000000013 T intel_finish_page_flip_plane
> >>> -0000000000000053 t hsw_pipe_crc_irq_handler
> >>> +000000000000004c t hsw_pipe_crc_irq_handler
> >>> -000000000000022e t cpt_irq_handler
> >>> +0000000000000213 t cpt_irq_handler
> >>>
> >>>So small shrinkage but it is all fast paths so doesn't harm.
> >>>
> >>>Situation is similar in other interrupt handlers as well.
> >>>
> >>>Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@xxxxxxxxx>
> >>>---
> >>> drivers/gpu/drm/i915/i915_drv.c       |   2 +-
> >>> drivers/gpu/drm/i915/i915_drv.h       |  13 +-
> >>> drivers/gpu/drm/i915/i915_irq.c       | 386 ++++++++++++++++------------------
> >>> drivers/gpu/drm/i915/intel_display.c  |  33 ++-
> >>> drivers/gpu/drm/i915/intel_drv.h      |  12 +-
> >>> drivers/gpu/drm/i915/intel_hotplug.c  |  13 +-
> >>> drivers/gpu/drm/i915/intel_opregion.c |   4 +-
> >>> drivers/gpu/drm/i915/intel_pm.c       |  33 ++-
> >>> 8 files changed, 232 insertions(+), 264 deletions(-)
> >>
> >>I have a similar series with almost identical impact, so from that
> >>perspective we should be close. I found we could measure an improvement
> >>in IRQ waiter wakeup latency (but only about ~2% so right on the noise
> >>level) for gen6/gen7.
> >>
> >>Patch looks good, spot checking against mine:
> >>
> >>@@ -1617,8 +1617,7 @@ void gen6_rps_idle(struct drm_i915_private *dev_priv);
> >> void gen6_rps_boost(struct drm_i915_private *dev_priv,
> >>                     struct intel_rps_client *rps,
> >>                     unsigned long submitted);
> >>-void intel_queue_rps_boost_for_request(struct drm_device *dev,
> >>-                                       struct drm_i915_gem_request *req);
> >>+void intel_queue_rps_boost_for_request(struct drm_i915_gem_request *req);
> >
> >Against your tree or your patch? If latter that would be quite amazing!
> >
> >>Follow up whilst waiting for CI?
> >
> >Sure.
> >
> >>Reviewed-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> >
> >Thanks!
> >
> >Regards,
> >Tvrtko
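To make the churn concrete: the conversion under review boils down to the
pattern below. This is a minimal sketch assuming the i915 types of the day
(dev->dev_private holding the drm_i915_private pointer);
example_hpd_irq_handler and its hotplug_trigger parameter are made up, not
taken from the actual patch.

/*
 * Before: the helper takes dev, immediately derives dev_priv, and then
 * drags both around, so every caller in the IRQ path needs dev too.
 */
static void example_hpd_irq_handler(struct drm_device *dev,
                                    u32 hotplug_trigger)
{
        struct drm_i915_private *dev_priv = dev->dev_private;

        /* ... register access and bookkeeping all go through dev_priv ... */
}

/*
 * After: the helper takes dev_priv directly; the local variable and the
 * dev->dev_private hop disappear, and callers that already hold dev_priv
 * no longer need dev at all.
 */
static void example_hpd_irq_handler(struct drm_i915_private *dev_priv,
                                    u32 hotplug_trigger)
{
        /* ... everything needed is reachable from dev_priv ... */
}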
>
> I've got some Cocci scripts that do the whole kaboodle, including
> INTEL_INFO()->gen => INTEL_GEN(), and passing dev_priv to INTEL_INFO() and
> all the IS_X() macros wherever possible. Saves ~1.5Kb, but lots and lots of
> churn. Perhaps we should pick-n-choose single files or groups of files at a
> time? intel_display.c and intel_pm.c are the biggest targets, whereas
> intel_lrc.c has only a couple remaining.

I think concentrating on hot paths, like this patch here does, makes
sense. Then, when things are quiet, we can sneak in a per-file patch for
the more boring bits like intel_pm.c.

Not sure about intel_display.c, because that's an ever-growing dumping
ground, and with atomic still in flight I definitely don't want to upset
it. There have been various plans to split it up a bit (and document it
while we're at it); maybe we could throw in refactorings like this too.

Oh and pls share that cocci magic.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
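Until that cocci magic is shared, the INTEL_INFO()->gen => INTEL_GEN()
rewrite Dave describes has roughly the shape below. This is a sketch of the
intended before/after, not taken from his scripts;
example_init_clock_gating is a made-up call site, and per his note the
IS_X() macros would get the same dev => dev_priv treatment.

/* Before: generation checks read ->gen through INTEL_INFO(), and the
 * macro is often handed dev rather than dev_priv. */
if (INTEL_INFO(dev)->gen >= 9)
        example_init_clock_gating(dev);

/* After: INTEL_GEN() on dev_priv directly, one less pointer hop, and
 * the callee can take dev_priv as well. */
if (INTEL_GEN(dev_priv) >= 9)
        example_init_clock_gating(dev_priv);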