Re: [PATCH 1/3] rcu: Use static initializer for krc.lock


 



> <rant>
> I really wish you would crop your email. If I scroll down three pages
> without seeing any reply, I usually stop reading there.
> </rant>
> 
Agreed. I will crop my replies next time.

> > Paul, i have just measured the time duration of the schedule_delayed_work().
> > To do that i used below patch:
> > 
> > <snip>
> > diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> > index 02f73f7bbd40..f74ae0f3556e 100644
> > --- a/kernel/rcu/tree.c
> > +++ b/kernel/rcu/tree.c
> > @@ -3232,6 +3232,12 @@ static inline struct rcu_head *attach_rcu_head_to_object(void *obj)
> >         return ((struct rcu_head *) ++ptr);
> >  }
> >  
> > +static void noinline
> > +measure_schedule_delayed_work(struct kfree_rcu_cpu *krcp)
> > +{
> > +       schedule_delayed_work(&krcp->monitor_work, KFREE_DRAIN_JIFFIES);
> > +}
> > +
> >  /*
> >   * Queue a request for lazy invocation of appropriate free routine after a
> >   * grace period. Please note there are three paths are maintained, two are the
> > @@ -3327,8 +3333,7 @@ void kvfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
> >         if (rcu_scheduler_active == RCU_SCHEDULER_RUNNING &&
> >                         !krcp->monitor_todo) {
> >                 krcp->monitor_todo = true;
> > -               schedule_delayed_work(&krcp->monitor_work,
> > -                       expedited_drain ? 0 : KFREE_DRAIN_JIFFIES);
> > +               measure_schedule_delayed_work(krcp);
> >         }
> > <snip>
> > 
> > I have done this on a !CONFIG_PREEMPT_RT kernel; I do not have any RT configuration.
> > I ran rcuperf to apply load and measured the time taken by the actual queuing of the
> > work, i.e. the time spent in schedule_delayed_work():
> > 
> > <snip>
> > root@pc636:/sys/kernel/debug/tracing# cat trace
> > # tracer: function_graph
> > #
> > # function_graph latency trace v1.1.5 on 5.6.0-rc6+
> > # --------------------------------------------------------------------
> > # latency: 0 us, #16/16, CPU#0 | (M:server VP:0, KP:0, SP:0 HP:0 #P:4)
> > #    -----------------
> > #    | task: -0 (uid:0 nice:0 policy:0 rt_prio:0)
> > #    -----------------
> > #
> > #                                       _-----=> irqs-off
> > #                                      / _----=> need-resched
> > #                                     | / _---=> hardirq/softirq
> > #                                     || / _--=> preempt-depth
> > #                                     ||| /
> > #     TIME        CPU  TASK/PID       ||||     DURATION                  FUNCTION CALLS
> > #      |          |     |    |        ||||      |   |                     |   |   |   |
> >   682.384653 |   1)    <idle>-0    |  d.s. |   5.329 us    |  } /* measure_schedule_delayed_work.constprop.86 */
> 
> Strange output. Do you have all functions being traced? That could
> cause overhead.
> 
> Try this:
> 
> 	# echo measure_schedule_delayed_work > set_ftrace_filter
> 	# echo function_graph > current_tracer
> 	# cat trace
> 
> That will give you much better timings of the overhead of a single
> function.
> 
I followed your steps exactly. I am not tracing all available
functions; only one is set in the filter:

<snip>
root@pc636:/sys/kernel/debug/tracing# cat set_ftrace_filter
measure_schedule_delayed_work.constprop.86
root@pc636:/sys/kernel/debug/tracing# cat tracing_thresh
5
root@pc636:/sys/kernel/debug/tracing# cat current_tracer
function_graph
root@pc636:/sys/kernel/debug/tracing#
<snip>

I also set a 5-microsecond threshold to filter out anything shorter,
and enabled the latency-format trace option.
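For reference, the full sequence can be reproduced roughly like this (a sketch, assuming debugfs is mounted at /sys/kernel/debug; the `.constprop.86` suffix is compiler-generated and may differ between builds):

```shell
cd /sys/kernel/debug/tracing

# Trace only the measurement wrapper, nothing else.
echo measure_schedule_delayed_work.constprop.86 > set_ftrace_filter

# With the function_graph tracer, record only calls longer than 5 us.
echo 5 > tracing_thresh

# Include the irqs-off/need-resched/preempt-depth annotations in the output.
echo latency-format > trace_options

# Use the function graph tracer to get entry/exit durations.
echo function_graph > current_tracer

# Generate load (e.g. via rcuperf), then inspect the result.
cat trace
```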

--
Vlad Rezki


