On Tue, Jun 21, 2016 at 09:35:44AM +0200, Daniel Vetter wrote:
> On Fri, Jun 17, 2016 at 06:54:48PM +0100, Chris Wilson wrote:
> > During cleanup we have to synchronise with the async task we are using
> > to initialise and register our fbdev. Currently, we are using a full
> > synchronisation on the global domain, but we can restrict this to just
> > synchronising up to our task if we remember our cookie.
> > 
> > v2: async_synchronize_cookie() takes an exclusive upper bound, to
> > synchronize with our task we have to pass in the next cookie.
> > v3: Drop premature disregarding of the active cookie (we need to wait
> > until the task is complete before continuing in the teardown).
> > 
> > Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> > Cc: Lukas Wunner <lukas@xxxxxxxxx>
> > ---
> >  drivers/gpu/drm/i915/intel_drv.h   |  1 +
> >  drivers/gpu/drm/i915/intel_fbdev.c | 29 ++++++++++++++++-------------
> >  2 files changed, 17 insertions(+), 13 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
> > index 0c1dc9bae170..b657ddd2d078 100644
> > --- a/drivers/gpu/drm/i915/intel_drv.h
> > +++ b/drivers/gpu/drm/i915/intel_drv.h
> > @@ -159,6 +159,7 @@ struct intel_framebuffer {
> >  struct intel_fbdev {
> >  	struct drm_fb_helper helper;
> >  	struct intel_framebuffer *fb;
> > +	async_cookie_t cookie;
> >  	int preferred_bpp;
> >  };
> > 
> > diff --git a/drivers/gpu/drm/i915/intel_fbdev.c b/drivers/gpu/drm/i915/intel_fbdev.c
> > index 4babefc51eb2..638e420a59cb 100644
> > --- a/drivers/gpu/drm/i915/intel_fbdev.c
> > +++ b/drivers/gpu/drm/i915/intel_fbdev.c
> > @@ -538,8 +538,7 @@ static const struct drm_fb_helper_funcs intel_fb_helper_funcs = {
> >  	.fb_probe = intelfb_create,
> >  };
> > 
> > -static void intel_fbdev_destroy(struct drm_device *dev,
> > -				struct intel_fbdev *ifbdev)
> > +static void intel_fbdev_destroy(struct intel_fbdev *ifbdev)
> >  {
> >  	/* We rely on the object-free to release the VMA pinning for
> >  	 * the info->screen_base mmaping. Leaking the VMA is simpler than
> > @@ -552,12 +551,14 @@ static void intel_fbdev_destroy(struct drm_device *dev,
> >  	drm_fb_helper_fini(&ifbdev->helper);
> > 
> >  	if (ifbdev->fb) {
> > -		mutex_lock(&dev->struct_mutex);
> > +		mutex_lock(&ifbdev->helper.dev->struct_mutex);
> >  		intel_unpin_fb_obj(&ifbdev->fb->base, BIT(DRM_ROTATE_0));
> > -		mutex_unlock(&dev->struct_mutex);
> > +		mutex_unlock(&ifbdev->helper.dev->struct_mutex);
> > 
> >  		drm_framebuffer_remove(&ifbdev->fb->base);
> >  	}
> > +
> > +	kfree(ifbdev);
> >  }
> > 
> >  /*
> > @@ -732,32 +733,34 @@ int intel_fbdev_init(struct drm_device *dev)
> > 
> >  static void intel_fbdev_initial_config(void *data, async_cookie_t cookie)
> >  {
> > -	struct drm_i915_private *dev_priv = data;
> > -	struct intel_fbdev *ifbdev = dev_priv->fbdev;
> > +	struct intel_fbdev *ifbdev = data;
> > 
> >  	/* Due to peculiar init order wrt to hpd handling this is separate. */
> >  	if (drm_fb_helper_initial_config(&ifbdev->helper,
> >  					 ifbdev->preferred_bpp))
> > -		intel_fbdev_fini(dev_priv->dev);
> > +		intel_fbdev_fini(ifbdev->helper.dev);
> >  }
> > 
> >  void intel_fbdev_initial_config_async(struct drm_device *dev)
> >  {
> > -	async_schedule(intel_fbdev_initial_config, to_i915(dev));
> > +	struct intel_fbdev *ifbdev = to_i915(dev)->fbdev;
> > +
> > +	ifbdev->cookie = async_schedule(intel_fbdev_initial_config, ifbdev);
> >  }
> > 
> >  void intel_fbdev_fini(struct drm_device *dev)
> >  {
> >  	struct drm_i915_private *dev_priv = dev->dev_private;
> > -	if (!dev_priv->fbdev)
> > +	struct intel_fbdev *ifbdev = dev_priv->fbdev;
> > +
> > +	if (!ifbdev)
> >  		return;
> > 
> >  	flush_work(&dev_priv->fbdev_suspend_work);
> > +	if (ifbdev->cookie && !current_is_async())
> > +		async_synchronize_cookie(ifbdev->cookie + 1);
> 
> First I went like wtf about the cookie+1, but the main use case for this
> function (or intended use-case at least) is to synchronize with everything
> before your own async task when you register. To uphold deterministic dev
> node ordering ...

Yup, it's a total wtf. Definitely scores high on Rusty's how to screw
with your API consumers. The whole async-vs-sync kernel is the same. If
only the kernel had fences as a completion variable...

> Needs a comment in the code imo, this is too suprising:
> 
> 	/* Only synchronizes with all _preceeding_ async tasks, hence + 1 */
> 
> Or whatever you feel like.

Ok.
-Chris

-- 
Chris Wilson, Intel Open Source Technology Centre
_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
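
For reference, here is a sketch of how the teardown wait in intel_fbdev_fini()
reads with the suggested comment folded in. It is only an illustration
assembled from the diff quoted above, not necessarily the exact committed
version; it assumes the i915 context from the patch (struct intel_fbdev having
gained the async_cookie_t cookie member, filled in by
intel_fbdev_initial_config_async()), and the trailing part of the function is
elided.

#include <linux/async.h> /* async_cookie_t, async_synchronize_cookie(), current_is_async() */

void intel_fbdev_fini(struct drm_device *dev)
{
	struct drm_i915_private *dev_priv = dev->dev_private;
	struct intel_fbdev *ifbdev = dev_priv->fbdev;

	if (!ifbdev)
		return;

	flush_work(&dev_priv->fbdev_suspend_work);

	/*
	 * async_synchronize_cookie() treats the cookie as an exclusive upper
	 * bound, i.e. it only synchronizes with all _preceding_ async tasks,
	 * hence the + 1 to include our own task. Skip the wait when we are
	 * running inside the async task itself (the initial-config path calls
	 * back into fini on failure), otherwise we would wait on ourselves.
	 */
	if (ifbdev->cookie && !current_is_async())
		async_synchronize_cookie(ifbdev->cookie + 1);

	/* ... remainder of the teardown unchanged ... */
}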