We have been using the hardlockup detector (hld) internally for arm64 since
2020, but there is still no proper support for it upstream, and we badly
need it. This series is a rework, on top of 5.17, of [1]; the original
authors are Pingfan Liu <kernelfans@xxxxxxxxx> and
Sumit Garg <sumit.garg@xxxxxxxxxx>.

Quote from [1]:
> Hard lockup detector is helpful to diagnose unpaired irq enable/disable.
> But the current watchdog framework can not cope with arm64 hw perf event
> easily.
> On arm64, when lockup_detector_init()->watchdog_nmi_probe(), PMU is not
> ready until device_initcall(armv8_pmu_driver_init). And it is deeply
> integrated with the driver model and cpuhp. Hence it is hard to push the
> initialization of armv8_pmu_driver_init() before smp_init(). But it is
> easy to take an opposite approach by enabling watchdog_hld to get the
> capability of PMU async.
> The async model is achieved by expanding watchdog_nmi_probe() with
> -EBUSY, and a re-initializing work_struct which waits on a
> wait_queue_head.

This series provides an API, retry_lockup_detector_init(), for anyone who
needs a delayed init of the lockup detector.

The original assumption is: nobody should use the delayed probe after
lockup_detector_check() (which has the __init attribute). That is, anyone
who uses this API must call it between lockup_detector_init() and
lockup_detector_check(), and the caller must have the __init attribute as
well.

The delayed init flow is:
1. lockup_detector_init() -> watchdog_nmi_probe() returns non-zero, so
   allow_lockup_detector_init_retry is set to true, which means a delayed
   probe can be done later.
2. The arch PMU init code finishes and calls retry_lockup_detector_init().
3. retry_lockup_detector_init() queues the work only when
   allow_lockup_detector_init_retry is true, which means nobody should call
   it before lockup_detector_init().
4. The work item, lockup_detector_delay_init(), runs without waiting on an
   event; if the probe succeeds, it sets allow_lockup_detector_init_retry
   to false.
5. At late_initcall_sync(), lockup_detector_check() first sets
   allow_lockup_detector_init_retry to false to avoid any later retry, and
   then calls flush_work() to make sure the __init section won't be freed
   before the work is done.
(A condensed sketch of this flow is appended at the end of this cover
letter.)

[1] https://lore.kernel.org/lkml/20211014024155.15253-1-kernelfans@xxxxxxxxx/

v7:
  rebase on v6.0-rc3

v6:
  fix the build failure reported by kernel test robot <lkp@xxxxxxxxx>
https://lore.kernel.org/lkml/20220614062835.7196-1-lecopzer.chen@xxxxxxxxxxxx/

v5:
  1. rebase on v5.19-rc2
  2. change to the proper schedule API
  3. check the return value before retry_lockup_detector_init()
https://lore.kernel.org/lkml/20220613135956.15711-1-lecopzer.chen@xxxxxxxxxxxx/

v4:
  1. remove the -EBUSY protocol; let any non-zero value from
     watchdog_nmi_probe() allow a retry.
  2. separate the arm64 part into a hw_nmi_get_sample_period patch and a
     retry delayed init patch
  3. tweak the commit messages so that we don't have to limit to -EBUSY
  4. rebase on v5.18-rc4
https://lore.kernel.org/lkml/20220427161340.8518-1-lecopzer.chen@xxxxxxxxxxxx/

v3:
  1. Tweak the commit message in patch 04
  2. Remove the wait event
  3. s/lockup_detector_pending_init/allow_lockup_detector_init_retry/
  4. Provide the API retry_lockup_detector_init()
https://lore.kernel.org/lkml/20220324141405.10835-1-lecopzer.chen@xxxxxxxxxxxx/

v2:
  1. Tweak the commit messages in patches 01/02/04/05
  2. Remove the verbose WARN in patch 04 within the watchdog core.
  3. Change from the three-state variable detector_delay_init_state to the
     two-state variable allow_lockup_detector_init_retry.
     Thanks Petr Mladek <pmladek@xxxxxxxx> for the idea.
     > 1. lockup_detector_work() called before lockup_detector_check().
     >    In this case, wait_event() will wait until
     >    lockup_detector_check() clears detector_delay_pending_init and
     >    calls wake_up().
     > 2. lockup_detector_check() called before lockup_detector_work().
     >    In this case, wait_event() will immediately continue because it
     >    will see the cleared detector_delay_pending_init.
  4. Add comments in the code in patches 04/05 for the two-state variable
     changes.
https://lore.kernel.org/lkml/20220307154729.13477-1-lecopzer.chen@xxxxxxxxxxxx/

Lecopzer Chen (5):
  kernel/watchdog: remove WATCHDOG_DEFAULT
  kernel/watchdog: change watchdog_nmi_enable() to void
  kernel/watchdog: Adapt the watchdog_hld interface for async model
  arm64: add hw_nmi_get_sample_period for preparation of lockup detector
  arm64: Enable perf events based hard lockup detector

Pingfan Liu (1):
  kernel/watchdog_hld: Ensure CPU-bound context when creating hardlockup
    detector event

 arch/arm64/Kconfig               |  2 +
 arch/arm64/kernel/Makefile       |  1 +
 arch/arm64/kernel/perf_event.c   | 12 +++++-
 arch/arm64/kernel/watchdog_hld.c | 39 +++++++++++++++++
 arch/sparc/kernel/nmi.c          |  8 ++--
 drivers/perf/arm_pmu.c           |  5 +++
 include/linux/nmi.h              |  4 +-
 include/linux/perf/arm_pmu.h     |  2 +
 kernel/watchdog.c                | 72 +++++++++++++++++++++++++++++---
 kernel/watchdog_hld.c            |  8 +++-
 10 files changed, 139 insertions(+), 14 deletions(-)
 create mode 100644 arch/arm64/kernel/watchdog_hld.c

-- 
2.25.1
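
P.S. For reviewers who want a quick picture of the delayed init flow
described above, here is a condensed, watchdog-core-side sketch. The
function and variable names follow this series, but the __init/__initdata
placement, locking, error reporting and the interaction with the rest of
kernel/watchdog.c are left out, so please refer to the actual patches for
the real code.

#include <linux/init.h>
#include <linux/nmi.h>
#include <linux/workqueue.h>

static bool allow_lockup_detector_init_retry;

/* Step 4: the deferred work re-probes; on success no more retry is needed. */
static void lockup_detector_delay_init(struct work_struct *work)
{
	if (!watchdog_nmi_probe())
		allow_lockup_detector_init_retry = false;
}

static DECLARE_WORK(detector_work, lockup_detector_delay_init);

/* Steps 2-3: called from arch PMU init once the PMU is finally usable. */
void retry_lockup_detector_init(void)
{
	/* Only honoured after lockup_detector_init() armed the retry. */
	if (!allow_lockup_detector_init_retry)
		return;

	schedule_work(&detector_work);
}

/* Step 1: the first probe fails because the PMU isn't ready; arm the retry. */
void __init lockup_detector_init(void)
{
	if (watchdog_nmi_probe())
		allow_lockup_detector_init_retry = true;
	/* ... */
}

/* Step 5: forbid any later retry, then wait for in-flight retry work. */
static int __init lockup_detector_check(void)
{
	allow_lockup_detector_init_retry = false;
	flush_work(&detector_work);
	return 0;
}
late_initcall_sync(lockup_detector_check);

Because lockup_detector_check() clears allow_lockup_detector_init_retry
before flush_work(), no new retry can be queued once the flush has started,
which is what makes it safe to free the __init sections afterwards.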