Patch "tracing: Avoid possible softlockup in tracing_iter_reset()" has been added to the 4.19-stable tree

This is a note to let you know that I've just added the patch titled

    tracing: Avoid possible softlockup in tracing_iter_reset()

to the 4.19-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     tracing-avoid-possible-softlockup-in-tracing_iter_re.patch
and it can be found in the queue-4.19 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit 0da63077bcb93386dfbd00bc9fe8205d8bf30fd9
Author: Zheng Yejian <zhengyejian@xxxxxxxxxxxxxxx>
Date:   Tue Aug 27 20:46:54 2024 +0800

    tracing: Avoid possible softlockup in tracing_iter_reset()
    
    [ Upstream commit 49aa8a1f4d6800721c7971ed383078257f12e8f9 ]
    
    In __tracing_open(), when a max latency tracer has taken place on a CPU,
    the start time of that CPU's buffer is updated, and event entries with
    timestamps earlier than the buffer's start time are then skipped
    (see tracing_iter_reset()).
    
    A softlockup can occur if the kernel is non-preemptible and too many
    entries are skipped in the loop that resets every cpu buffer, so add
    cond_resched() to avoid it.
    
    Cc: stable@xxxxxxxxxxxxxxx
    Fixes: 2f26ebd549b9a ("tracing: use timestamp to determine start of latency traces")
    Link: https://lore.kernel.org/20240827124654.3817443-1-zhengyejian@xxxxxxxxxxxxxxx
    Suggested-by: Steven Rostedt <rostedt@xxxxxxxxxxx>
    Signed-off-by: Zheng Yejian <zhengyejian@xxxxxxxxxxxxxxx>
    Signed-off-by: Steven Rostedt (Google) <rostedt@xxxxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 022f50dbc456..63c3c17d406c 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -3251,6 +3251,8 @@ void tracing_iter_reset(struct trace_iterator *iter, int cpu)
 			break;
 		entries++;
 		ring_buffer_iter_advance(buf_iter);
+		/* This could be a big loop */
+		cond_resched();
 	}
 
 	per_cpu_ptr(iter->trace_buffer->data, cpu)->skipped_entries = entries;
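
For context, here is a minimal sketch of how the affected loop in
tracing_iter_reset() reads with this patch applied. It is an illustrative
reconstruction, not the verbatim 4.19 source: the surrounding declarations
and the ring_buffer_iter_peek()-based loop condition are assumptions based
on the hunk above, which only shows the loop body.

	/* Sketch of tracing_iter_reset() after the patch (kernel/trace/trace.c) */
	void tracing_iter_reset(struct trace_iterator *iter, int cpu)
	{
		struct ring_buffer_iter *buf_iter = trace_buffer_iter(iter, cpu);
		unsigned long entries = 0;
		u64 ts;

		if (!buf_iter)
			return;

		ring_buffer_iter_reset(buf_iter);

		/*
		 * Skip entries stamped before the buffer's start time, since a
		 * max latency tracer may have moved time_start forward.
		 */
		while (ring_buffer_iter_peek(buf_iter, &ts)) {
			if (ts >= iter->trace_buffer->time_start)
				break;
			entries++;
			ring_buffer_iter_advance(buf_iter);
			/* This could be a big loop */
			cond_resched();
		}

		per_cpu_ptr(iter->trace_buffer->data, cpu)->skipped_entries = entries;
	}

The key point is simply that cond_resched() gives the scheduler a chance to
run inside what can be a very long, otherwise tight loop on a non-preemptible
kernel, which is what avoids the softlockup.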



