Re: [PATCH] tracing: Fix trace entry and trace common fields for preempt_lazy_count

On Fri, Feb 21, 2020 at 11:21:52AM -0500, Steven Rostedt wrote:
> On Fri, 21 Feb 2020 17:10:30 +0100
> Jiri Olsa <jolsa@xxxxxxxxxx> wrote:
> 
> > On Fri, Feb 21, 2020 at 10:49:22AM -0500, Steven Rostedt wrote:
> > > On Fri, 21 Feb 2020 16:35:41 +0100
> > > Jiri Olsa <jolsa@xxxxxxxxxx> wrote:
> > >   
> > > > When commit 65fd07df3588 added preempt_lazy_count into 'struct trace_entry',
> > > > it did not add 4 bytes of padding. We also need to update the common fields
> > > > for the tracepoint, otherwise some tools (bpftrace) stop working due to
> > > > missing common fields.
> > > > 
> > > > Fixes: 65fd07df3588 ("x86: Support for lazy preemption")
> > > > Signed-off-by: Jiri Olsa <jolsa@xxxxxxxxxx>
> > > > ---
> > > >  include/linux/trace_events.h | 2 ++
> > > >  kernel/trace/trace_events.c  | 3 +++
> > > >  2 files changed, 5 insertions(+)
> > > > 
> > > > diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
> > > > index f3b1ef07e4a5..51a3f5188923 100644
> > > > --- a/include/linux/trace_events.h
> > > > +++ b/include/linux/trace_events.h
> > > > @@ -65,6 +65,8 @@ struct trace_entry {
> > > >  	unsigned short		migrate_disable;
> > > >  	unsigned short		padding;
> > > >  	unsigned char		preempt_lazy_count;
> > > > +	unsigned char		padding1;
> > > > +	unsigned short		padding2;  
> > > 
> > > 
> > > Wait! I don't have these changes in my tree, nor do I see them in
> > > Linus's. This really bloats the trace events! This header is very
> > > sensitive to size, and adding to it willy-nilly is unacceptable.
> > > It's like adding to struct page. This gets added to *every* event,
> > > and a single added byte causes 1MB extra for a million events (very
> > > common in tracing), and 1GB extra for a billion events.  
> > 
> > I'm on top of:
> >   git://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git
> >   v5.4.19-rt11-rebase
> > 
> 
> Ug, I now see it:
> 
> struct trace_entry {
> 	unsigned short		type;
> 	unsigned char		flags;
> 	unsigned char		preempt_count;
> 	int			pid;
> 	unsigned short		migrate_disable;
> 	unsigned short		padding;
> 	unsigned char		preempt_lazy_count;
> };
> 
> Which adds a ton of bloat.
> 
> > > 
> > > Let's find a better way to handle this.  
> > 
> > I can fix the bpftrace tool I guess, though it's not
> > so convenient given the way it's used there
> 
> Not as inconvenient as dropping events due to wasted space in the ring
> buffer. Note, this is attached to function tracing events. Any increase
> here will cause more function events to be dropped.

sure, I'll probably fix it anyway, but there might be other broken tools ;-)

libtraceevent/perf is actually ok with this, probably because it follows
the offsets and sizes directly... actually bpftrace might be a special
case, because it builds a C struct out of the fields, so there's a gap
between the common fields and the rest of the fields
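
roughly something like this (just a sketch, the field names and offsets
below are my reading of the struct above, not copied from an actual
format file):

	/* common fields as a tool would lay them out from the format
	 * description, ending right after preempt_lazy_count */
	struct common_fields {
		unsigned short	common_type;			/* offset  0, size 2 */
		unsigned char	common_flags;			/* offset  2, size 1 */
		unsigned char	common_preempt_count;		/* offset  3, size 1 */
		int		common_pid;			/* offset  4, size 4 */
		unsigned short	common_migrate_disable;		/* offset  8, size 2 */
		unsigned short	common_padding;			/* offset 10, size 2 */
		unsigned char	common_preempt_lazy_count;	/* offset 12, size 1 */
	};

	/* the compiler pads struct trace_entry to 16 bytes, so the
	 * event-specific fields really start at offset 16, not 13;
	 * a tool that appends its own fields right after the common
	 * ones ends up reading the payload 3 bytes too early */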

jirka

> 
> Why is migrate disable a short? Is there going to be more than 256
> levels of nesting?
> 
> -- Steve
> 
