On Tue, Jan 28, 2025 at 8:22 PM Oleg Nesterov <oleg@xxxxxxxxxx> wrote:
>
> On 01/28, Mateusz Guzik wrote:
> >
> > On Tue, Jan 28, 2025 at 7:30 PM Oleg Nesterov <oleg@xxxxxxxxxx> wrote:
> >
> > no problem, will send a v3 provided there are no issues reported
> > concerning the pid stuff
>
> Great, thanks.
>
> BTW, I didn't look at the pid stuff yet, I _feel_ that this can be simplified
> too, but I am already sleeping, most probably I am wrong.

I looked at the pid code apart from the issue at hand. The lock protecting
it (pidmap_lock) uses irq disablement to guard against tasklist_lock users
coming from an interrupt.

AFAICS this can be legally arranged so that pidmap_lock is *never* taken
while tasklist_lock is held. So far the problematic ordering only stems
from free_pid() calls (not only on exit), which can all be moved out; a
rough sketch of what I mean is at the bottom of this mail.

This will reduce total tasklist_lock hold time *and* whack the irq trip,
speeding things up single-threaded. I'll hack it up when I get around to
it, maybe this week.

Btw, with the current patch, when rolling with highly parallel thread
creation/destruction it is pidmap_lock which is the main bottleneck
instead of tasklist_lock.

--
Mateusz Guzik <mjguzik gmail.com>
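PS. The sketch of the ordering change, illustrative only and not the actual
patch: detach_pid_deferred() is a made-up placeholder for however the detach
path ends up handing the struct pid back to the caller instead of freeing it
inline.

	struct pid *pid;

	write_lock_irq(&tasklist_lock);
	/*
	 * Detach the pid from the task while tasklist_lock is held, but
	 * defer the freeing. detach_pid_deferred() is a hypothetical
	 * helper, named only for this example.
	 */
	pid = detach_pid_deferred(p);
	write_unlock_irq(&tasklist_lock);

	/*
	 * free_pid() takes pidmap_lock, but now does so without
	 * tasklist_lock held. Once no path nests pidmap_lock inside
	 * tasklist_lock, the irq-disabled locking on pidmap_lock can
	 * go away as well.
	 */
	free_pid(pid);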