On Tue, Jan 23, 2024 at 12:33:37PM -0500, Steven Rostedt wrote:
> From: "Steven Rostedt (Google)" <rostedt@xxxxxxxxxxx>
>
> The trace_mmap() checks if the mmapped data has any previously read data
> (the reader.read value of the meta page is non-zero), then it will pre-read
> the kbuf to move its internal reader pointer to the same value.
>
> But when tracefs_cpu_read_buf() calls trace_mmap_load_subbuf(), its
> kbuf->subbuffer will not be the same as the mapped data and
> kbuffer_load_subbuffer() is called on it and it is returned. But that means
> its read pointer has not been updated, and the read data will restart again.
>
> When the kbuf is updated in trace_mmap_load_subbuf() check the tmap->kbuf to
> see if it already read any of the data, and move the kbuffer forward just
> like the trace_mmap() does.
>
> Link: https://lore.kernel.org/linux-trace-devel/Za-Md51snPcIoYFn@xxxxxxxxxx/
>
> Fixes: 2ed14b59 ("libtracefs: Add ring buffer memory mapping APIs")
> Reported-by: Vincent Donnefort <vdonnefort@xxxxxxxxxx>
> Signed-off-by: Steven Rostedt (Google) <rostedt@xxxxxxxxxxx>

That fixes the fast-forward; events already read are now skipped.

> ---
>  src/tracefs-mmap.c | 6 ++++++
>  1 file changed, 6 insertions(+)
>
> diff --git a/src/tracefs-mmap.c b/src/tracefs-mmap.c
> index d3af453..a288677 100644
> --- a/src/tracefs-mmap.c
> +++ b/src/tracefs-mmap.c
> @@ -165,6 +165,12 @@ __hidden int trace_mmap_load_subbuf(void *mapping, struct kbuffer *kbuf)
>  	 */
>  	if (data != kbuffer_subbuffer(kbuf)) {
>  		kbuffer_load_subbuffer(kbuf, data);
> +		/* Move the read pointer forward if need be */
> +		if (kbuffer_curr_index(tmap->kbuf)) {
> +			int size = kbuffer_curr_offset(tmap->kbuf);
> +			char tmpbuf[size];
> +			kbuffer_read_buffer(kbuf, tmpbuf, size);
> +		}
>  		return 1;
>  	}
>
> --
> 2.43.0
>