On Mon, Mar 30, 2020 at 03:00:35PM +0200, Daniel Vetter wrote:
> Hi all, for all = rcu, cpuhotplug and perf maintainers
> 
> We've hit an interesting new lockdep splat in our drm/i915 CI:
> 
> https://intel-gfx-ci.01.org/tree/drm-tip/Patchwork_17096/shard-tglb7/igt@kms_frontbuffer_tracking@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx#dmesg-warnings861
> 
> Summarizing away the driver parts we have
> 
> < gpu locks which are held within mm->mmap_sem in various gpu fault handlers >
> 
> -> #4 (&mm->mmap_sem#2){++++}:
> <4> [604.892615] __might_fault+0x63/0x90
> <4> [604.892617] _copy_to_user+0x1e/0x80
> <4> [604.892619] perf_read+0x200/0x2b0
> <4> [604.892621] vfs_read+0x96/0x160
> <4> [604.892622] ksys_read+0x9f/0xe0
> <4> [604.892623] do_syscall_64+0x4f/0x220
> <4> [604.892624] entry_SYSCALL_64_after_hwframe+0x49/0xbe
> <4> [604.892625]
> -> #3 (&cpuctx_mutex){+.+.}:
> <4> [604.892626] __mutex_lock+0x9a/0x9c0
> <4> [604.892627] perf_event_init_cpu+0xa4/0x140
> <4> [604.892629] perf_event_init+0x19d/0x1cd
> <4> [604.892630] start_kernel+0x362/0x4e4
> <4> [604.892631] secondary_startup_64+0xa4/0xb0
> <4> [604.892631]
> -> #2 (pmus_lock){+.+.}:
> <4> [604.892633] __mutex_lock+0x9a/0x9c0
> <4> [604.892633] perf_event_init_cpu+0x6b/0x140
> <4> [604.892635] cpuhp_invoke_callback+0x9b/0x9d0
> <4> [604.892636] _cpu_up+0xa2/0x140
> <4> [604.892637] do_cpu_up+0x61/0xa0
> <4> [604.892639] smp_init+0x57/0x96
> <4> [604.892639] kernel_init_freeable+0x87/0x1dc
> <4> [604.892640] kernel_init+0x5/0x100
> <4> [604.892642] ret_from_fork+0x24/0x50
> <4> [604.892642]
> -> #1 (cpu_hotplug_lock.rw_sem){++++}:
> <4> [604.892643] cpus_read_lock+0x34/0xd0
> <4> [604.892644] rcu_barrier+0xaa/0x190
> <4> [604.892645] kernel_init+0x21/0x100
> <4> [604.892647] ret_from_fork+0x24/0x50
> <4> [604.892647]
> -> #0 (rcu_state.barrier_mutex){+.+.}:
> <4> [604.892649] __lock_acquire+0x1328/0x15d0
> <4> [604.892650] lock_acquire+0xa7/0x1c0
> <4> [604.892651] __mutex_lock+0x9a/0x9c0
> <4> [604.892652] rcu_barrier+0x23/0x190
> <4> [604.892680] i915_gem_object_unbind+0x29d/0x3f0 [i915]
> <4> [604.892707] i915_gem_object_pin_to_display_plane+0x141/0x270 [i915]
> <4> [604.892737] intel_pin_and_fence_fb_obj+0xec/0x1f0 [i915]
> <4> [604.892767] intel_plane_pin_fb+0x3f/0xd0 [i915]
> <4> [604.892797] intel_prepare_plane_fb+0x13b/0x5c0 [i915]
> <4> [604.892798] drm_atomic_helper_prepare_planes+0x85/0x110
> <4> [604.892827] intel_atomic_commit+0xda/0x390 [i915]
> <4> [604.892828] drm_atomic_helper_set_config+0x57/0xa0
> <4> [604.892830] drm_mode_setcrtc+0x1c4/0x720
> <4> [604.892830] drm_ioctl_kernel+0xb0/0xf0
> <4> [604.892831] drm_ioctl+0x2e1/0x390
> <4> [604.892833] ksys_ioctl+0x7b/0x90
> <4> [604.892835] __x64_sys_ioctl+0x11/0x20
> <4> [604.892835] do_syscall_64+0x4f/0x220
> <4> [604.892836] entry_SYSCALL_64_after_hwframe+0x49/0xbe
> 
> The last backtrace boils down to i915 driver code which holds the same
> locks we are holding within mm->mmap_sem, and then ends up calling
> rcu_barrier(). From what I can see i915 is just the messenger here;
> any driver with this pattern of a lock held within mmap_sem which also
> has a path of calling rcu_barrier() while holding that lock should be
> hitting this splat.
> 
> Two questions:
> - This suggests that calling rcu_barrier() isn't ok anymore while
>   holding mmap_sem, or anything that has a dependency upon mmap_sem. I
>   guess that's not the idea, please confirm.
> - Assuming this dependency is indeed not intended, where should the
>   loop be broken? It goes through the perf, cpuhotplug and rcu
>   subsystems, and I don't have a clue about any of those.
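
Reduced to its essentials, the cycle in the splat above looks roughly
like the sketch below. This is illustrative pseudocode rather than
actual i915 or perf code; demo_obj and its lock merely stand in for
the driver locks taken under mmap_sem in the fault handlers.

#include <linux/mm.h>
#include <linux/mutex.h>
#include <linux/rcupdate.h>

/* Stand-in for a GPU buffer object and its lock. */
struct demo_obj {
	struct mutex lock;
};

/*
 * Path A: fault handler, entered with mmap_sem already held, so
 * lockdep records mmap_sem -> demo_obj.lock.
 */
static vm_fault_t demo_gpu_fault(struct demo_obj *obj)
{
	mutex_lock(&obj->lock);
	/* ... pin pages and insert PTEs ... */
	mutex_unlock(&obj->lock);
	return VM_FAULT_NOPAGE;
}

/*
 * Path B: unbind path holds the same lock and calls rcu_barrier(),
 * which takes rcu_state.barrier_mutex and cpus_read_lock(), giving
 * demo_obj.lock -> barrier_mutex -> cpu_hotplug_lock.
 */
static void demo_unbind(struct demo_obj *obj)
{
	mutex_lock(&obj->lock);
	rcu_barrier();
	mutex_unlock(&obj->lock);
}

/*
 * Path C (perf, per the trace above): perf_event_init_cpu() takes
 * pmus_lock and cpuctx_mutex under cpu_hotplug_lock, and perf_read()
 * does copy_to_user() under cpuctx_mutex, which can fault and so
 * depends on mmap_sem -- closing the cycle back to Path A.
 */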
Indeed, rcu_barrier() excludes CPU hotplug in order to eliminate a
number of interesting races.

Am I interpreting the above trace correctly in thinking that the
various calls to cpus_read_lock() are made with mmap_sem held?  If so,
can the calls to rcu_barrier() be moved out from under the regions of
code protected by cpus_read_lock()?  Invoking rcu_barrier() with
cpus_read_lock() held is an immediate self-deadlock.

Or is rcu_barrier() somehow indirectly sometimes acquiring mmap_sem or
pmus_lock?  (Not seeing it myself, but...)

							Thanx, Paul
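
For illustration, the self-deadlock described above reduces to the
pattern below: per the #1 entry in the trace, rcu_barrier() itself
calls cpus_read_lock(), so the rearrangement asked about amounts to
hoisting rcu_barrier() out of the hotplug-locked region. Both helpers
here are made up for the sketch, not code from any of the subsystems
involved.

#include <linux/cpu.h>
#include <linux/rcupdate.h>

/* Broken: rcu_barrier() takes cpus_read_lock() internally. */
static void demo_broken(void)
{
	cpus_read_lock();
	rcu_barrier();		/* self-deadlock on cpu_hotplug_lock */
	cpus_read_unlock();
}

/* Rearranged: wait for pending RCU callbacks before excluding hotplug. */
static void demo_rearranged(void)
{
	rcu_barrier();
	cpus_read_lock();
	/* ... work that must exclude CPU hotplug ... */
	cpus_read_unlock();
}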