On Tue, Oct 27, 2020 at 02:39:43PM +0000, David Woodhouse wrote:
> From: David Woodhouse <dwmw@xxxxxxxxxxxx>
> 
> This allows an exclusive wait_queue_entry to be added at the head of the
> queue, instead of the tail as normal. Thus, it gets to consume events
> first without allowing non-exclusive waiters to be woken at all.
> 
> The (first) intended use is for KVM IRQFD, which currently has

Do you have more? You could easily special case this inside the KVM
code. I don't _think_ the other users of __add_wait_queue() will mind
the extra branch, but what do I know.

> inconsistent behaviour depending on whether posted interrupts are
> available or not. If they are, KVM will bypass the eventfd completely
> and deliver interrupts directly to the appropriate vCPU. If not, events
> are delivered through the eventfd and userspace will receive them when
> polling on the eventfd.
> 
> By using add_wait_queue_priority(), KVM will be able to consistently
> consume events within the kernel without accidentally exposing them
> to userspace when they're supposed to be bypassed. This, in turn, means
> that userspace doesn't have to jump through hoops to avoid listening
> on the erroneously noisy eventfd and injecting duplicate interrupts.
> 
> Signed-off-by: David Woodhouse <dwmw@xxxxxxxxxxxx>
> ---
>  include/linux/wait.h | 12 +++++++++++-
>  kernel/sched/wait.c  | 17 ++++++++++++++++-
>  2 files changed, 27 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/wait.h b/include/linux/wait.h
> index 27fb99cfeb02..fe10e8570a52 100644
> --- a/include/linux/wait.h
> +++ b/include/linux/wait.h
> @@ -22,6 +22,7 @@ int default_wake_function(struct wait_queue_entry *wq_entry, unsigned mode, int
>  #define WQ_FLAG_BOOKMARK	0x04
>  #define WQ_FLAG_CUSTOM		0x08
>  #define WQ_FLAG_DONE		0x10
> +#define WQ_FLAG_PRIORITY	0x20
> 
>  /*
>   * A single wait-queue entry structure:
> @@ -164,11 +165,20 @@ static inline bool wq_has_sleeper(struct wait_queue_head *wq_head)
> 
>  extern void add_wait_queue(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry);
>  extern void add_wait_queue_exclusive(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry);
> +extern void add_wait_queue_priority(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry);
>  extern void remove_wait_queue(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry);
> 
>  static inline void __add_wait_queue(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry)
>  {
> -	list_add(&wq_entry->entry, &wq_head->head);
> +	struct list_head *head = &wq_head->head;
> +	struct wait_queue_entry *wq;
> +
> +	list_for_each_entry(wq, &wq_head->head, entry) {
> +		if (!(wq->flags & WQ_FLAG_PRIORITY))
> +			break;
> +		head = &wq->entry;
> +	}
> +	list_add(&wq_entry->entry, head);
>  }

So you're adding the PRIORITY things to the head of the list and need
the PRIORITY flag to keep them in FIFO order there, right?

While looking at this I found that weird __add_wait_queue_exclusive()
which is used by fs/eventpoll.c and does something similar, except it
doesn't keep the FIFO order.

The Changelog doesn't state how important this property is to you.
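FWIW, here's a quick stand-alone sketch of the ordering that insert loop
produces, just to spell the property out. The struct entry / add_entry()
bits below are made-up userspace stand-ins for the wait queue types, not
anything from the patch; only the WQ_FLAG_PRIORITY handling mirrors the
new __add_wait_queue():

#include <stdio.h>

#define WQ_FLAG_PRIORITY 0x20

struct entry {
	unsigned int flags;
	const char *name;
	struct entry *next;	/* singly linked is enough for the demo */
};

static struct entry *head;

/* Mirror __add_wait_queue(): insert after the last PRIORITY entry. */
static void add_entry(struct entry *e)
{
	struct entry **pos = &head;

	while (*pos && ((*pos)->flags & WQ_FLAG_PRIORITY))
		pos = &(*pos)->next;

	e->next = *pos;
	*pos = e;
}

int main(void)
{
	struct entry a = { 0, "normal-A" };
	struct entry b = { WQ_FLAG_PRIORITY, "prio-B" };
	struct entry c = { WQ_FLAG_PRIORITY, "prio-C" };
	struct entry d = { 0, "normal-D" };
	struct entry *e;

	add_entry(&a);
	add_entry(&b);
	add_entry(&c);
	add_entry(&d);

	/*
	 * Prints: prio-B, prio-C, normal-D, normal-A -- priority entries
	 * stay FIFO at the head, normal entries go LIFO behind them.
	 */
	for (e = head; e; e = e->next)
		printf("%s\n", e->name);
	return 0;
}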
>  /*
> diff --git a/kernel/sched/wait.c b/kernel/sched/wait.c
> index 01f5d3020589..183cc6ae68a6 100644
> --- a/kernel/sched/wait.c
> +++ b/kernel/sched/wait.c
> @@ -37,6 +37,17 @@ void add_wait_queue_exclusive(struct wait_queue_head *wq_head, struct wait_queue
>  }
>  EXPORT_SYMBOL(add_wait_queue_exclusive);
> 
> +void add_wait_queue_priority(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry)
> +{
> +	unsigned long flags;
> +
> +	wq_entry->flags |= WQ_FLAG_EXCLUSIVE | WQ_FLAG_PRIORITY;
> +	spin_lock_irqsave(&wq_head->lock, flags);
> +	__add_wait_queue(wq_head, wq_entry);
> +	spin_unlock_irqrestore(&wq_head->lock, flags);
> +}
> +EXPORT_SYMBOL_GPL(add_wait_queue_priority);
> +
>  void remove_wait_queue(struct wait_queue_head *wq_head, struct wait_queue_entry *wq_entry)
>  {
>  	unsigned long flags;
> @@ -57,7 +68,11 @@ EXPORT_SYMBOL(remove_wait_queue);
>  /*
>   * The core wakeup function. Non-exclusive wakeups (nr_exclusive == 0) just
>   * wake everything up. If it's an exclusive wakeup (nr_exclusive == small +ve
> - * number) then we wake all the non-exclusive tasks and one exclusive task.
> + * number) then we wake that number of exclusive tasks, and potentially all
> + * the non-exclusive tasks. Normally, exclusive tasks will be at the end of
> + * the list and any non-exclusive tasks will be woken first. A priority task
> + * may be at the head of the list, and can consume the event without any other
> + * tasks being woken.
>   *
>   * There are circumstances in which we can try to wake a task which has already
>   * started to run but is not in state TASK_RUNNING. try_to_wake_up() returns
> -- 
> 2.26.2
> 
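And just to check my understanding of how a consumer would use this:
something like the (completely made-up, untested) sketch below, where the
priority entry's wake function consumes the event in-kernel and returns
nonzero so the list walk stops before reaching any non-exclusive poll
waiters? my_consumer / my_wakeup / my_handle_event are invented names,
not the actual irqfd code:

#include <linux/wait.h>

struct my_consumer {
	struct wait_queue_entry wait;
	/* ... whatever per-consumer state is needed ... */
};

/* Hypothetical in-kernel delivery, standing in for e.g. irqfd injection. */
static void my_handle_event(struct my_consumer *c)
{
}

static int my_wakeup(struct wait_queue_entry *wq_entry, unsigned int mode,
		     int sync, void *key)
{
	struct my_consumer *c = container_of(wq_entry, struct my_consumer, wait);

	my_handle_event(c);

	/*
	 * The entry is exclusive (add_wait_queue_priority() sets
	 * WQ_FLAG_EXCLUSIVE), so returning nonzero uses up the single
	 * exclusive slot of a wake_up_poll()-style wakeup and the walk
	 * stops before any non-exclusive waiters further down the list.
	 */
	return 1;
}

static void my_consumer_register(struct my_consumer *c,
				 struct wait_queue_head *wqh)
{
	init_waitqueue_func_entry(&c->wait, my_wakeup);
	add_wait_queue_priority(wqh, &c->wait);
}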