By default, each DRM scheduler instance creates an ordered workqueue for
submission, and each workqueue creation allocates a new lockdep map. This
becomes problematic when a DRM scheduler is created for every user queue
(e.g., in DRM drivers with firmware schedulers like Xe) due to the limited
number of available lockdep maps. With numerous user queues being created
and destroyed, lockdep may run out of maps, leading to lockdep being
disabled. Xe mitigated this by creating a pool of workqueues for DRM
scheduler use. However, this approach also encounters issues if the driver
is unloaded and reloaded multiple times, or if many VFs are probed.

To address this, we propose creating a single lockdep map for all DRM
scheduler workqueues, which also resolves the issue for other DRM drivers
that create a DRM scheduler per user queue.

This solution has been tested by unloading and reloading the Xe driver.
Before this series, around 30 driver reloads would result in lockdep being
turned off. With the series applied, the driver can be unloaded and
reloaded hundreds of times without issue.

v2:
 - Split workqueue changes into multiple patches
 - Add alloc_workqueue_lockdep_map (Tejun)
 - Don't RFC

Matt

Matthew Brost (5):
  workqueue: Split alloc_workqueue into internal function and lockdep init
  workqueue: Change workqueue lockdep map to pointer
  workqueue: Add interface for user-defined workqueue lockdep map
  drm/sched: Use drm sched lockdep map for submit_wq
  drm/xe: Drop GuC submit_wq pool

 drivers/gpu/drm/scheduler/sched_main.c | 11 ++++
 drivers/gpu/drm/xe/xe_guc_submit.c     | 60 +--------------------
 drivers/gpu/drm/xe/xe_guc_types.h      |  7 ---
 include/linux/workqueue.h              | 25 +++++++++
 kernel/workqueue.c                     | 75 ++++++++++++++++++------
 5 files changed, 97 insertions(+), 81 deletions(-)

-- 
2.34.1
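
P.S. For reviewers skimming the cover letter, the core idea can be sketched
roughly as follows. This is a hand-written illustration, not the exact patch
contents: it assumes an ordered-workqueue variant
(alloc_ordered_workqueue_lockdep_map) of the alloc_workqueue_lockdep_map
interface mentioned in the v2 changelog, and a static map named
drm_sched_lockdep_map. All DRM scheduler submit workqueues then share one
statically allocated lockdep map instead of each workqueue allocation
consuming one of lockdep's finite maps:

```c
/*
 * Sketch only: one static lockdep map shared by every DRM scheduler
 * submit workqueue. Creating/destroying user queues (and their
 * schedulers) no longer allocates or leaks lockdep maps.
 */
#ifdef CONFIG_LOCKDEP
static struct lockdep_map drm_sched_lockdep_map = {
	.name = "drm_sched_lockdep_map",
};
#endif

	/* In drm_sched_init(), when the driver does not pass its own submit_wq: */
#ifdef CONFIG_LOCKDEP
	sched->submit_wq = alloc_ordered_workqueue_lockdep_map(name,
							       WQ_MEM_RECLAIM,
							       &drm_sched_lockdep_map);
#else
	sched->submit_wq = alloc_ordered_workqueue(name, WQ_MEM_RECLAIM);
#endif
	if (!sched->submit_wq)
		return -ENOMEM;
```

Since the map is static, lockdep still cross-checks locking dependencies
across all scheduler workqueues (they are intentionally treated as one
class), which is the desired behavior for identical per-queue schedulers.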