Patch "ring-buffer: Make wake once of ring_buffer_wait() more robust" has been added to the 6.8-stable tree

This is a note to let you know that I've just added the patch titled

    ring-buffer: Make wake once of ring_buffer_wait() more robust

to the 6.8-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     ring-buffer-make-wake-once-of-ring_buffer_wait-more-.patch
and it can be found in the queue-6.8 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit 6d42c58b93ecfefaad182e663d984429ad079d18
Author: Steven Rostedt (Google) <rostedt@xxxxxxxxxxx>
Date:   Fri Mar 15 06:31:15 2024 -0400

    ring-buffer: Make wake once of ring_buffer_wait() more robust
    
    [ Upstream commit b70f2938242a028f8e9473781ede175486a59dc8 ]
    
    The default behavior of ring_buffer_wait() when passed a NULL "cond"
    parameter is to exit the function the first time it is woken up. The
    current implementation uses a counter that starts at zero, and the
    wait_event_interruptible() exits once that counter is greater than one.
    
    But this relies on the internal workings of wait_event_interruptible(),
    as that code basically has:
    
      if (cond)
        return;
      prepare_to_wait();
      if (!cond)
        schedule();
      finish_wait();
    
    That is, cond is called twice before it sleeps. The default cond of
    ring_buffer_wait() needs to account for that and wait for its counter to
    increment twice before exiting.
    
    Instead, use the seq/atomic_inc logic that is used by the tracing code
    that calls this function. Add an atomic_t seq to rb_irq_work and, when
    cond is NULL, have the default callback take as its data a descriptor
    that holds the rbwork and the value of the seq when the wait started.
    
    The wakeups now increment rbwork->seq, and the cond callback simply
    checks whether that number has changed, so it no longer relies on the
    implementation of wait_event_interruptible().
    
    Link: https://lore.kernel.org/linux-trace-kernel/20240315063115.6cb5d205@xxxxxxxxxxxxxxxxxx
    
    Cc: Masami Hiramatsu <mhiramat@xxxxxxxxxx>
    Cc: Mathieu Desnoyers <mathieu.desnoyers@xxxxxxxxxxxx>
    Fixes: 7af9ded0c2ca ("ring-buffer: Use wait_event_interruptible() in ring_buffer_wait()")
    Signed-off-by: Steven Rostedt (Google) <rostedt@xxxxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>
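
As an aside for readers who want the pattern in isolation: below is a
minimal userspace sketch of the same seq/atomic_inc idea, using C11
atomics and pthreads. It is an illustrative analogue, not kernel code --
the wake_once_* names are made up and pthread_cond_t stands in for the
wait queue -- but the release increment in the waker pairs with the
acquire reads in the waiter just as atomic_fetch_inc_release() and
atomic_read_acquire() do in the patch below.

#include <pthread.h>
#include <stdatomic.h>

/* Initialize with PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER
 * and seq = 0. */
struct wake_once {
	pthread_mutex_t	lock;	/* serializes the condvar sleep/wake */
	pthread_cond_t	cond;
	atomic_int	seq;	/* bumped on every wakeup */
};

/* Waker side: the analogue of rb_wake_up_waiters() bumping rbwork->seq */
static void wake_once_signal(struct wake_once *w)
{
	/* Release pairs with the acquire loads in wake_once_wait() */
	atomic_fetch_add_explicit(&w->seq, 1, memory_order_release);

	pthread_mutex_lock(&w->lock);
	pthread_cond_broadcast(&w->cond);
	pthread_mutex_unlock(&w->lock);
}

/* Waiter side: the analogue of ring_buffer_wait() with a NULL cond */
static void wake_once_wait(struct wake_once *w)
{
	/* Snapshot the sequence before sleeping, as rdata.seq does */
	int seq = atomic_load_explicit(&w->seq, memory_order_acquire);

	pthread_mutex_lock(&w->lock);
	/* Exit as soon as any wakeup has changed the sequence */
	while (atomic_load_explicit(&w->seq, memory_order_acquire) == seq)
		pthread_cond_wait(&w->cond, &w->lock);
	pthread_mutex_unlock(&w->lock);
}

Because the exit test compares against a snapshot instead of counting
its own invocations, it does not matter how many times the condition is
evaluated before the waiter sleeps -- which is exactly the robustness
the patch is after.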

diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index ad0d475d1f570..43060a7ae15e7 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -384,6 +384,7 @@ struct rb_irq_work {
 	struct irq_work			work;
 	wait_queue_head_t		waiters;
 	wait_queue_head_t		full_waiters;
+	atomic_t			seq;
 	bool				waiters_pending;
 	bool				full_waiters_pending;
 	bool				wakeup_full;
@@ -753,6 +754,9 @@ static void rb_wake_up_waiters(struct irq_work *work)
 {
 	struct rb_irq_work *rbwork = container_of(work, struct rb_irq_work, work);
 
+	/* For waiters waiting for the first wake up */
+	(void)atomic_fetch_inc_release(&rbwork->seq);
+
 	wake_up_all(&rbwork->waiters);
 	if (rbwork->full_waiters_pending || rbwork->wakeup_full) {
 		/* Only cpu_buffer sets the above flags */
@@ -881,20 +885,21 @@ rb_wait_cond(struct rb_irq_work *rbwork, struct trace_buffer *buffer,
 	return false;
 }
 
+struct rb_wait_data {
+	struct rb_irq_work		*irq_work;
+	int				seq;
+};
+
 /*
  * The default wait condition for ring_buffer_wait() is to just to exit the
  * wait loop the first time it is woken up.
  */
 static bool rb_wait_once(void *data)
 {
-	long *once = data;
+	struct rb_wait_data *rdata = data;
+	struct rb_irq_work *rbwork = rdata->irq_work;
 
-	/* wait_event() actually calls this twice before scheduling*/
-	if (*once > 1)
-		return true;
-
-	(*once)++;
-	return false;
+	return atomic_read_acquire(&rbwork->seq) != rdata->seq;
 }
 
 /**
@@ -915,14 +920,9 @@ int ring_buffer_wait(struct trace_buffer *buffer, int cpu, int full,
 	struct ring_buffer_per_cpu *cpu_buffer;
 	struct wait_queue_head *waitq;
 	struct rb_irq_work *rbwork;
-	long once = 0;
+	struct rb_wait_data rdata;
 	int ret = 0;
 
-	if (!cond) {
-		cond = rb_wait_once;
-		data = &once;
-	}
-
 	/*
 	 * Depending on what the caller is waiting for, either any
 	 * data in any cpu buffer, or a specific buffer, put the
@@ -944,6 +944,14 @@ int ring_buffer_wait(struct trace_buffer *buffer, int cpu, int full,
 	else
 		waitq = &rbwork->waiters;
 
+	/* Set up to exit loop as soon as it is woken */
+	if (!cond) {
+		cond = rb_wait_once;
+		rdata.irq_work = rbwork;
+		rdata.seq = atomic_read_acquire(&rbwork->seq);
+		data = &rdata;
+	}
+
 	ret = wait_event_interruptible((*waitq),
 				rb_wait_cond(rbwork, buffer, cpu, full, cond, data));
 
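To spell out the implementation dependence the commit message describes,
here is the old default cond from the removed hunk above, followed by a
step-by-step trace. The prepare_to_wait()/schedule() flow is a sketch of
wait_event_interruptible(), not the real macro in include/linux/wait.h:

static bool old_rb_wait_once(void *data)
{
	long *once = data;

	/* wait_event() actually calls this twice before scheduling */
	if (*once > 1)
		return true;

	(*once)++;
	return false;
}

/* Flow with the old cond, once starting at 0:
 *
 *   cond()  -> once 0 -> 1, returns false   (checked before waiting)
 *   prepare_to_wait();
 *   cond()  -> once 1 -> 2, returns false   (checked again, then sleep)
 *   schedule();                             (woken by rb_wake_up_waiters())
 *   cond()  -> once == 2 > 1, returns true  (loop exits)
 *
 * Exiting on the first wakeup therefore depends on cond being evaluated
 * exactly twice before the first sleep -- an implementation detail of
 * wait_event_interruptible() that the new seq-based cond does not need.
 */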



