On 01/24, Steven Rostedt wrote:
> On Fri, 24 Jan 2025 09:38:25 +0000
> Liao Chang <liaochang1@xxxxxxxxxx> wrote:
>
> > Since clearing a bit in thread_info is an atomic operation, the spinlock
> > is redundant and can be removed; reducing lock contention is good for
> > performance.
>
> Although this patch is probably fine, the change log suggests a dangerous
> precedent. Just because clearing a flag is atomic, that alone does not
> guarantee that it doesn't need spin locks around it.

Yes. And IIRC we already have lockless users of clear(TIF_SIGPENDING)
(some, if not most, of them look buggy). But AFAICS, in this (very special)
case it should be fine. See also
https://lore.kernel.org/all/20240812120738.GC11656@xxxxxxxxxx/

> There may be another path that tests the flag within a spin lock,

Yes, retarget_shared_pending() or the complete_signal()/wants_signal() loop.
That is why it was decided to take siglock in uprobe_deny_signal(), just to
be "safe".

But I still think this patch is fine. The current task is going to execute
a single insn which can't enter the kernel and/or return to userspace
before it calls handle_singlestep() and restores TIF_SIGPENDING. We do not
care if it races with another source of TIF_SIGPENDING.

The only problem is that task_sigpending() from another task can "wrongly"
return false in this window, but I don't see any problem with that.

Oleg.