On Fri, 2018-10-05 at 17:11 +0200, Philipp Zabel wrote:
> On Fri, 2018-09-14 at 18:59 +0200, Lucas Stach wrote:
> > Currently there is a small race window where we could manage to arm the
> > vblank event from atomic flush, but programming the hardware was too close
> > to the frame end, so the hardware will only apply the current state on the
> > next vblank. In this case we will send out the commit done event too early,
> > causing userspace to reuse framebuffers that are still in use.
> > 
> > Instead of using the event arming mechanism, just remember the pending event
> > and send it from the vblank IRQ handler, once we are sure that all state
> > has been applied successfully.
> > 
> > Signed-off-by: Lucas Stach <l.stach@xxxxxxxxxxxxxx>
> > ---
> >  drivers/gpu/drm/imx/ipuv3-crtc.c | 32 ++++++++++++++++++++++++++++----
> >  1 file changed, 28 insertions(+), 4 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/imx/ipuv3-crtc.c b/drivers/gpu/drm/imx/ipuv3-crtc.c
> > index 7d4b710b837a..b0c95565a28d 100644
> > --- a/drivers/gpu/drm/imx/ipuv3-crtc.c
> > +++ b/drivers/gpu/drm/imx/ipuv3-crtc.c
> > @@ -42,6 +42,7 @@ struct ipu_crtc {
> >  	struct ipu_dc *dc;
> >  	struct ipu_di *di;
> >  	int irq;
> > +	struct drm_pending_vblank_event *event;
> >  };
> >  
> >  static inline struct ipu_crtc *to_ipu_crtc(struct drm_crtc *crtc)
> > @@ -181,8 +182,31 @@ static const struct drm_crtc_funcs ipu_crtc_funcs = {
> >  static irqreturn_t ipu_irq_handler(int irq, void *dev_id)
> >  {
> >  	struct ipu_crtc *ipu_crtc = dev_id;
> > +	struct drm_crtc *crtc = &ipu_crtc->base;
> > +	unsigned long flags;
> > +	int i;
> > +
> > +	drm_crtc_handle_vblank(crtc);
> > +
> > +	if (ipu_crtc->event) {
> > +		for (i = 0; i < ARRAY_SIZE(ipu_crtc->plane); i++) {
> > +			struct ipu_plane *plane = ipu_crtc->plane[i];
> > +
> > +			if (!plane)
> > +				continue;
> > +
> > +			if (!ipu_plane_atomic_update_done(&plane->base))
> 
> 	if (ipu_plane_atomic_update_pending(&plane->base))
> 
> > +				break;
> > +		}
> > 
> > -	drm_crtc_handle_vblank(&ipu_crtc->base);
> > +		if (i == ARRAY_SIZE(ipu_crtc->plane)) {
> > +			spin_lock_irqsave(&crtc->dev->event_lock, flags);
> > +			drm_crtc_send_vblank_event(crtc, ipu_crtc->event);
> > +			ipu_crtc->event = NULL;
> 
> These two happen under the event spinlock, but where event is set in
> ipu_crtc_atomic_flush, the locking is removed.
> 
> > +			drm_crtc_vblank_put(crtc);
> > +			spin_unlock_irqrestore(&crtc->dev->event_lock, flags);
> > +		}
> > +	}
> > 
> >  	return IRQ_HANDLED;
> >  }
> > @@ -229,13 +253,13 @@ static void ipu_crtc_atomic_begin(struct drm_crtc *crtc,
> >  static void ipu_crtc_atomic_flush(struct drm_crtc *crtc,
> >  				  struct drm_crtc_state *old_crtc_state)
> >  {
> > -	spin_lock_irq(&crtc->dev->event_lock);
> > +	struct ipu_crtc *ipu_crtc = to_ipu_crtc(crtc);
> > +
> >  	if (crtc->state->event) {
> >  		WARN_ON(drm_crtc_vblank_get(crtc));
> > -		drm_crtc_arm_vblank_event(crtc, crtc->state->event);
> > +		ipu_crtc->event = crtc->state->event;
> 
> We assume here that ipu_crtc->event is NULL and the irq handler is never
> running at the same time, otherwise we would drop an event. This is
> non-obvious to me, and I think it warrants a comment.
> 
> My understanding is the following:
> 
> - It is virtually impossible for atomic_flush to race against a delayed
>   previous ipu_irq_handler because the previous commit's commit_tail
>   would still be waiting for the vblank event to release it from
>   drm_atomic_helper_wait_for_flip_done.
> 
>   However, if the last commit's tail finishes after the irq_handler
>   calls drm_crtc_send_vblank_event(), and the new commit is issued, and
>   its tail work scheduled, all before the next line in the irq_handler,
>   ipu_crtc->event = NULL, then the new commit's tail could call
>   drm_atomic_helper_commit_planes and therefore ipu_crtc_atomic_flush,
>   and ipu_crtc->event would be overwritten.
> 
> - It is unproblematic for a delayed atomic_flush to race against the
>   next ipu_irq_handler because ipu_crtc->event will just not be set
>   when the irq handler checks it, and the vblank event will be deferred
>   to the next interrupt.

How do we proceed with this? Keep the spin lock?

regards
Philipp
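For illustration, keeping the lock could look roughly like the sketch
below. This is only a sketch based on the patch above; the WARN_ON on a
still-set ipu_crtc->event is an added assumption to make the implicit
"previous event was already sent" expectation explicit, not something
the patch contains:

static void ipu_crtc_atomic_flush(struct drm_crtc *crtc,
				  struct drm_crtc_state *old_crtc_state)
{
	struct ipu_crtc *ipu_crtc = to_ipu_crtc(crtc);

	spin_lock_irq(&crtc->dev->event_lock);
	if (crtc->state->event) {
		WARN_ON(drm_crtc_vblank_get(crtc));

		/*
		 * Hand the event over to the IRQ handler under the same
		 * lock the handler takes before sending and clearing it,
		 * so this store cannot race with a handler that is just
		 * about to clear a previous event.
		 */
		WARN_ON(ipu_crtc->event);
		ipu_crtc->event = crtc->state->event;
		crtc->state->event = NULL;
	}
	spin_unlock_irq(&crtc->dev->event_lock);
}

Since ipu_irq_handler already holds event_lock around
drm_crtc_send_vblank_event() and the ipu_crtc->event = NULL store, the
remaining unlocked ipu_crtc->event check at the top of the handler would
at worst defer the event to the next interrupt, matching the second
bullet above.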