From: Pingfan Liu <kernelfans@xxxxxxxxx>

hardlockup_detector_event_create() should create the perf_event on the
current CPU. Preemption cannot be disabled because
perf_event_create_kernel_counter() allocates memory. Instead, CPU
locality is achieved by running the code in a per-CPU bound kthread.

Add a check to prevent mistakes when the code is called from another
code path.

Signed-off-by: Pingfan Liu <kernelfans@xxxxxxxxx>
Co-developed-by: Lecopzer Chen <lecopzer.chen@xxxxxxxxxxxx>
Signed-off-by: Lecopzer Chen <lecopzer.chen@xxxxxxxxxxxx>
Reviewed-by: Petr Mladek <pmladek@xxxxxxxx>
Signed-off-by: Douglas Anderson <dianders@xxxxxxxxxxxx>
---
I yanked this patch from the mailing lists [1] into my series just to
make it easier to avoid conflicts between my series and the one adding
the arm64 perf hardlockup detector, in case someone wanted to test them
both together.

This is a nice cleanup and could land together with the rest of my
series if that makes sense.

I changed the patch prefix to match others in my series.

[1] https://lore.kernel.org/r/20220903093415.15850-4-lecopzer.chen@xxxxxxxxxxxx/

(no changes since v4)

Changes in v4:
- Pulled ("Ensure CPU-bound context when creating ...") into my series
  for v4.

 kernel/watchdog_hld.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/kernel/watchdog_hld.c b/kernel/watchdog_hld.c
index 1e8a49dc956e..2125b09e09d7 100644
--- a/kernel/watchdog_hld.c
+++ b/kernel/watchdog_hld.c
@@ -165,10 +165,16 @@ static void watchdog_overflow_callback(struct perf_event *event,
 
 static int hardlockup_detector_event_create(void)
 {
-	unsigned int cpu = smp_processor_id();
+	unsigned int cpu;
 	struct perf_event_attr *wd_attr;
 	struct perf_event *evt;
 
+	/*
+	 * Preemption is not disabled because memory will be allocated.
+	 * Ensure CPU-locality by calling this in per-CPU kthread.
+	 */
+	WARN_ON(!is_percpu_thread());
+	cpu = raw_smp_processor_id();
 	wd_attr = &wd_hw_attr;
 	wd_attr->sample_period = hw_nmi_get_sample_period(watchdog_thresh);
 
-- 
2.40.1.698.g37aff9b760-goog
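
For reference, the guard pattern the patch adds can be illustrated on
its own. The function below is a hypothetical sketch (it is not part of
the patch or of kernel/watchdog_hld.c); it only shows how
is_percpu_thread() and raw_smp_processor_id() combine to give CPU
locality without disabling preemption, assuming the caller is a kthread
bound to a single CPU:

#include <linux/bug.h>
#include <linux/printk.h>
#include <linux/sched.h>
#include <linux/smp.h>

/*
 * Hypothetical example: a function that must operate on the CPU it is
 * running on, but cannot disable preemption because it does sleeping
 * work (e.g. a memory allocation).
 */
static int example_percpu_bound_create(void)
{
	unsigned int cpu;

	/*
	 * Only legal from a kthread bound to exactly one CPU (such as a
	 * per-CPU hotplug or smpboot thread); warn if called elsewhere.
	 */
	WARN_ON(!is_percpu_thread());

	/*
	 * raw_smp_processor_id() is stable here even with preemption
	 * enabled: the thread's affinity prevents migration, so the
	 * CPU number cannot change underneath us.
	 */
	cpu = raw_smp_processor_id();

	pr_info("example: running bound to CPU%u\n", cpu);

	/* ... sleeping work keyed to @cpu would go here ... */

	return 0;
}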