On Wed, Feb 13, 2013 at 6:06 PM, Ville Syrjälä <ville.syrjala at linux.intel.com> wrote:
>> > diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
>> > index a2e04f7..7e047c1 100644
>> > --- a/drivers/gpu/drm/i915/intel_display.c
>> > +++ b/drivers/gpu/drm/i915/intel_display.c
>> > @@ -2330,8 +2330,7 @@ intel_pipe_set_base(struct drm_crtc *crtc, int x, int y,
>> >                 return ret;
>> >         }
>> >
>> > -       if (crtc->fb)
>> > -               intel_finish_fb(crtc->fb);
>> > +       intel_crtc_wait_for_pending_flips_locked(crtc);
>>
>> Ah, I see now why you grab dev->struct_mutex and need to kick waiters from
>> the pending flip queue. I think grabbing the mutex twice isn't a major
>> offense, since both the crtc disable code and set_base are slowpaths used
>> rarely. So what about simply calling wait_for_pending_flips before
>> grabbing the mutex in intel_pipe_set_base? We could then also inline
>> finish_fb into its only callsite ...
>
> I didn't want to slow down intel_pipe_set_base() too much. If we wait
> for pending flips before pinning the new fb, we can never achieve any
> parallelism there. But if no-one cares about that, we can reorder.

I'm confused here - where can we extract parallelism in set_base between
waiting for pending flips and the pinning? And imo set_base isn't really
critical: it's officially a synchronous thing (we have a vblank wait in
there), and if we want to fix that, imo the nuclear pageflip should be
the answer.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch