On 03/21, Matthew Wilcox wrote:
>
> On Thu, Mar 21, 2019 at 05:45:10PM -0400, Waiman Long wrote:
> >
> > To avoid this dire condition and reduce lock hold time of tasklist_lock,
> > flush_sigqueue() is modified to pass in a freeing queue pointer so that
> > the actual freeing of memory objects can be deferred until after the
> > tasklist_lock is released and irq re-enabled.
>
> I think this is a really bad solution. It looks kind of generic,
> but isn't. It's terribly inefficient, and all it's really doing is
> deferring the debugging code until we've re-enabled interrupts.

Agreed.

> We'd be much better off just having a list_head in the caller
> and list_splice() the queue->list onto that caller. Then call
> __sigqueue_free() for each signal on the queue.

This won't work; note the comment which explains the race with
sigqueue_free().

Let me think about it... at least we can do something like

	static void close_the_race_with_sigqueue_free(struct sigpending *queue)
	{
		struct sigqueue *q, *t;

		list_for_each_entry_safe(q, t, ...) {
			/* let sigqueue_free() free the PREALLOC'ed entries */
			if (q->flags & SIGQUEUE_PREALLOC)
				list_del_init(&q->list);
		}
	}

called with ->siglock held; tasklist_lock is not needed.

After that, flush_sigqueue() can be called lockless in release_task().
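
IOW, combining this with the list_splice() idea above, the caller could
roughly look like the sketch below. This is only to show which part still
needs ->siglock and which part can run with irqs enabled; the helper name,
the local queue, and where exactly release_task() would do this are made
up, not a real patch:

	/* just a sketch, not even compile-tested */
	static void flush_task_sigqueue(struct task_struct *tsk)
	{
		struct sigpending unreachable;

		init_sigpending(&unreachable);

		spin_lock_irq(&tsk->sighand->siglock);
		/* detach the SIGQUEUE_PREALLOC entries, see above */
		close_the_race_with_sigqueue_free(&tsk->pending);
		/* nobody else can reach these sigqueues any longer */
		list_splice_init(&tsk->pending.list, &unreachable.list);
		spin_unlock_irq(&tsk->sighand->siglock);

		/* the actual freeing runs with irqs enabled, no tasklist_lock */
		flush_sigqueue(&unreachable);
	}

I'll try to make the patch tomorrow.

Oleg.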