On Sat, Nov 25, 2017 at 07:41:55PM +0000, Chris Wilson wrote:
> As both the hotplug event and fbdev configuration run asynchronously, it
> is possible for them to run concurrently. If configuration fails, we were
> freeing the fbdev causing a use-after-free in the hotplug event.
>
> <7>[ 3069.935211] [drm:intel_fb_initial_config [i915]] Not using firmware configuration
> <7>[ 3069.935225] [drm:drm_setup_crtcs] looking for cmdline mode on connector 77
> <7>[ 3069.935229] [drm:drm_setup_crtcs] looking for preferred mode on connector 77 0
> <7>[ 3069.935233] [drm:drm_setup_crtcs] found mode 3200x1800
> <7>[ 3069.935236] [drm:drm_setup_crtcs] picking CRTCs for 8192x8192 config
> <7>[ 3069.935253] [drm:drm_setup_crtcs] desired mode 3200x1800 set on crtc 43 (0,0)
> <7>[ 3069.935323] [drm:intelfb_create [i915]] no BIOS fb, allocating a new one
> <4>[ 3069.967737] general protection fault: 0000 [#1] PREEMPT SMP
> <0>[ 3069.977453] ---------------------------------
> <4>[ 3069.977457] Modules linked in: i915(+) vgem snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic x86_pkg_temp_thermal intel_powerclamp coretemp crct10dif_pclmul crc32_pclmul ghash_clmulni_intel snd_hda_codec snd_hwdep snd_hda_core snd_pcm r8169 mei_me mii prime_numbers mei i2c_hid pinctrl_geminilake pinctrl_intel [last unloaded: i915]
> <4>[ 3069.977492] CPU: 1 PID: 15414 Comm: kworker/1:0 Tainted: G     U  4.14.0-CI-CI_DRM_3388+ #1
> <4>[ 3069.977497] Hardware name: Intel Corp. Geminilake/GLK RVP1 DDR4 (05), BIOS GELKRVPA.X64.0062.B30.1708222146 08/22/2017
> <4>[ 3069.977508] Workqueue: events output_poll_execute
> <4>[ 3069.977512] task: ffff880177734e40 task.stack: ffffc90001fe4000
> <4>[ 3069.977519] RIP: 0010:__lock_acquire+0x109/0x1b60
> <4>[ 3069.977523] RSP: 0018:ffffc90001fe7bb0 EFLAGS: 00010002
> <4>[ 3069.977526] RAX: 6b6b6b6b6b6b6b6b RBX: 0000000000000282 RCX: 0000000000000000
> <4>[ 3069.977530] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff880170d4efd0
> <4>[ 3069.977534] RBP: ffffc90001fe7c70 R08: 0000000000000001 R09: 0000000000000000
> <4>[ 3069.977538] R10: 0000000000000000 R11: ffffffff81899609 R12: ffff880170d4efd0
> <4>[ 3069.977542] R13: ffff880177734e40 R14: 0000000000000001 R15: 0000000000000000
> <4>[ 3069.977547] FS:  0000000000000000(0000) GS:ffff88017fc80000(0000) knlGS:0000000000000000
> <4>[ 3069.977551] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> <4>[ 3069.977555] CR2: 00007f7e8b7bcf04 CR3: 0000000003e0f000 CR4: 00000000003406e0
> <4>[ 3069.977559] Call Trace:
> <4>[ 3069.977565]  ? mark_held_locks+0x64/0x90
> <4>[ 3069.977571]  ? _raw_spin_unlock_irq+0x24/0x50
> <4>[ 3069.977575]  ? _raw_spin_unlock_irq+0x24/0x50
> <4>[ 3069.977579]  ? trace_hardirqs_on_caller+0xde/0x1c0
> <4>[ 3069.977583]  ? _raw_spin_unlock_irq+0x2f/0x50
> <4>[ 3069.977588]  ? finish_task_switch+0xa5/0x210
> <4>[ 3069.977592]  ? lock_acquire+0xaf/0x200
> <4>[ 3069.977596]  lock_acquire+0xaf/0x200
> <4>[ 3069.977600]  ? __mutex_lock+0x5e9/0x9b0
> <4>[ 3069.977604]  _raw_spin_lock+0x2a/0x40
> <4>[ 3069.977608]  ? __mutex_lock+0x5e9/0x9b0
> <4>[ 3069.977612]  __mutex_lock+0x5e9/0x9b0
> <4>[ 3069.977616]  ? drm_fb_helper_hotplug_event.part.19+0x16/0xa0
> <4>[ 3069.977621]  ? drm_fb_helper_hotplug_event.part.19+0x16/0xa0
> <4>[ 3069.977625]  drm_fb_helper_hotplug_event.part.19+0x16/0xa0
> <4>[ 3069.977630]  output_poll_execute+0x8d/0x180
> <4>[ 3069.977635]  process_one_work+0x22e/0x660
> <4>[ 3069.977640]  worker_thread+0x48/0x3a0
> <4>[ 3069.977644]  ? _raw_spin_unlock_irqrestore+0x4c/0x60
> <4>[ 3069.977649]  kthread+0x102/0x140
> <4>[ 3069.977653]  ? process_one_work+0x660/0x660
> <4>[ 3069.977657]  ? kthread_create_on_node+0x40/0x40
> <4>[ 3069.977662]  ret_from_fork+0x27/0x40
> <4>[ 3069.977666] Code: 8d 62 f8 c3 49 81 3c 24 e0 fa 3c 82 41 be 00 00 00 00 45 0f 45 f0 83 fe 01 77 86 89 f0 49 8b 44 c4 08 48 85 c0 0f 84 76 ff ff ff <f0> ff 80 38 01 00 00 8b 1d 62 f9 e8 01 45 8b 85 b8 08 00 00 85
> <1>[ 3069.977707] RIP: __lock_acquire+0x109/0x1b60 RSP: ffffc90001fe7bb0
> <4>[ 3069.977712] ---[ end trace 4ad012eb3af62df7 ]---
>
> In order to keep the dev_priv->ifbdev alive after failure, we have to
> avoid the free and leave it empty until we unload the module. Then we
> can use intel_fbdev_sync() to serialise the hotplug event with the
> configuration. The serialisation between the two was removed in commit
> 934458c2c95d ("Revert "drm/i915: Fix races on fbdev""), but the use
> after free is much older, commit 366e39b4d2c5 ("drm/i915: Tear down fbdev
> if initialization fails")
>
> Fixes: 366e39b4d2c5 ("drm/i915: Tear down fbdev if initialization fails")
> Fixes: 934458c2c95d ("Revert "drm/i915: Fix races on fbdev"")
> Signed-off-by: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> Cc: Lukas Wunner <lukas@xxxxxxxxx>
> Cc: Joonas Lahtinen <joonas.lahtinen@xxxxxxxxxxxxxxx>
> Cc: Daniel Vetter <daniel.vetter@xxxxxxxx>
> Cc: stable@xxxxxxxxxxxxxxx

Patch looks correct to me, though keeping the ifbdev around even if it
failed to initialize is regrettable.
FWIW,

Reviewed-by: Lukas Wunner <lukas@xxxxxxxxx>

Thanks,

Lukas

> ---
>  drivers/gpu/drm/i915/intel_fbdev.c | 10 ++++++----
>  1 file changed, 6 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/intel_fbdev.c b/drivers/gpu/drm/i915/intel_fbdev.c
> index b8af35187d22..ea96682568e8 100644
> --- a/drivers/gpu/drm/i915/intel_fbdev.c
> +++ b/drivers/gpu/drm/i915/intel_fbdev.c
> @@ -697,10 +697,8 @@ static void intel_fbdev_initial_config(void *data, async_cookie_t cookie)
>
>  	/* Due to peculiar init order wrt to hpd handling this is separate. */
>  	if (drm_fb_helper_initial_config(&ifbdev->helper,
> -					 ifbdev->preferred_bpp)) {
> +					 ifbdev->preferred_bpp))
>  		intel_fbdev_unregister(to_i915(ifbdev->helper.dev));
> -		intel_fbdev_fini(to_i915(ifbdev->helper.dev));
> -	}
>  }
>
>  void intel_fbdev_initial_config_async(struct drm_device *dev)
> @@ -800,7 +798,11 @@ void intel_fbdev_output_poll_changed(struct drm_device *dev)
>  {
>  	struct intel_fbdev *ifbdev = to_i915(dev)->fbdev;
>
> -	if (ifbdev)
> +	if (!ifbdev)
> +		return;
> +
> +	intel_fbdev_sync(ifbdev);
> +	if (ifbdev->vma)
>  		drm_fb_helper_hotplug_event(&ifbdev->helper);
>  }
>
> --
> 2.15.0

_______________________________________________
Intel-gfx mailing list
Intel-gfx@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/intel-gfx