Re: Question about usage of RCU in the input layer

On Sat, 21 Mar 2009 21:38:38 -0700
"Paul E. McKenney" <paulmck@xxxxxxxxxxxxxxxxxx> wrote:

> On Sat, Mar 21, 2009 at 08:40:45PM -0700, Arjan van de Ven wrote:
> > On Sat, 21 Mar 2009 14:07:45 -0700
> > "Paul E. McKenney" <paulmck@xxxxxxxxxxxxxxxxxx> wrote:
> > 
> > > On Sat, Mar 21, 2009 at 09:26:08PM +0100, Eric Dumazet wrote:
> > > > Arjan van de Ven a écrit :
> > > > > On Fri, 20 Mar 2009 18:27:46 -0700
> > > > > "Paul E. McKenney" <paulmck@xxxxxxxxxxxxxxxxxx> wrote:
> > > > > 
> > > > >> On Fri, Mar 20, 2009 at 11:13:54AM -0700, Arjan van de Ven
> > > > >> wrote:
> > > > >>> On Fri, 20 Mar 2009 07:31:04 -0700
> > > > >>> "Paul E. McKenney" <paulmck@xxxxxxxxxxxxxxxxxx> wrote:
> > > > >>>>> that'd be throwing out the baby with the bathwater... I'm
> > > > >>>>> trying to use the other cpus to do some of the boot work
> > > > >>>>> (so that the total goes faster); not using the other cpus
> > > > >>>>> would be counter productive to that. (As is just sitting
> > > > >>>>> in synchronize_rcu() when the other cpu is working..
> > > > >>>>> hence this discussion ;-)
> > > > >>>> OK, so you are definitely running multiple CPUs when the
> > > > >>>> offending synchronize_rcu() executes, then?
> > > > >>> absolutely. 
> > > > >>> (and I'm using bootgraph.pl in scripts to track who's
> > > > >>> stalling etc)
> > > > >>>> If so, here are some follow-on questions:
> > > > >>>>
> > > > >>>> 1.	How many synchronize_rcu() calls are you seeing
> > > > >>>> on the critical boot path
> > > > >>> I've seen only this (input) one to take a long time
> > > > >> Ouch!!!  A -single- synchronize_rcu() taking a full second???
> > > > >> That indicates breakage.
> > > > >>
> > > > >>>>  and what value of HZ are you running?
> > > > >>> 1000
> > > > >> K, in absence of readers for RCU_CLASSIC, we should see a
> > > > >> handful of milliseconds for synchronize_rcu().
> > > > > 
> > > > > I've attached an instrumented bootgraph of what is going on;
> > > > > the rcu delays are shown as red blocks inside the regular
> > > > > functions as they initialize......
> > > > > 
> > > > > (svg can be viewed with inkscape, gimp, firefox and various
> > > > > other tools)
> > > > 
> > > > Interesting stuff...
> > > > 
> > > > I thought you mentioned i2c drivers being the source of the
> > > > udelays(), but I can't see them in this svg, unless it's
> > > > async_probe_hard?
> > > 
> > > Arjan, another thought -- if the udelays() are not under
> > > rcu_read_lock(), you should be able to finesse this by using
> > > CONFIG_PREEMPT_RCU, which will happily ignore spinning CPUs as
> > > long as they are not in an RCU read-side critical section.
> > 
> > I'll play with that
> > In the mean time I've reduced the "other" function's time
> > significantly; so the urgency has gone away some.
> 
> Good to hear!
> 
> > It's still "interesting" that even in the "there is only really one
> > thread running" case the minimum delay seems to be 2700 microseconds
> > for classic RCU. Especially during bootup that sounds a bit
> > harsh.... (since that is where many "read mostly" cases actually
> > get their modifications)
> 
> OK, I'll bite...  2700 microseconds measures exactly what?

I'm measuring the time that the following code takes:

        init_completion(&rcu.completion);
        /* Will wake me after RCU finished. */
        call_rcu(&rcu.head, wakeme_after_rcu);
        /* Wait for it. */
        wait_for_completion(&rcu.completion);

in kernel/rcupdate.c:synchronize_rcu();
(I put markings around it for bootgraph to pick up.)
It looks like this:
[    0.196157] rcu_waiting @ 1
[    0.198978] rcu_continuing @ 1 after 2929 usec
[    0.199585] rcu_waiting @ 1
[    0.201973] rcu_continuing @ 1 after 2929 usec
[    0.208132] rcu_waiting @ 1
[    0.210905] rcu_continuing @ 1 after 2633 usec
[    0.258025] rcu_waiting @ 1
[    0.260910] rcu_continuing @ 1 after 2742 usec
[    0.260988] rcu_waiting @ 1
[    0.263910] rcu_continuing @ 1 after 2778 usec
[    0.263987] rcu_waiting @ 1
[    0.266910] rcu_continuing @ 1 after 2778 usec
[    0.273030] rcu_waiting @ 1
[    0.275912] rcu_continuing @ 1 after 2738 usec
[    0.636267] rcu_waiting @ 1
[    0.639531] rcu_continuing @ 1 after 3113 usec
[    0.639611] rcu_waiting @ 1
[    0.642006] rcu_continuing @ 1 after 2242 usec
[    0.642086] rcu_waiting @ 1
[    0.645407] rcu_continuing @ 1 after 3169 usec
[    0.645487] rcu_waiting @ 1
[    0.648007] rcu_continuing @ 1 after 2361 usec
[    1.176323] rcu_waiting @ 1
[    1.873021] rcu_continuing @ 1 after 680368 usec
[    1.873108] rcu_waiting @ 1
[    2.046045] rcu_continuing @ 1 after 168881 usec

OK, so I was not entirely right; a few of them are a bit
shorter than 2700 usec...


> 
> Also, "really one thread" means hardware threads or software threads?

One software thread (as per the bootgraph).

> If the former, exactly which kernel are you using?  The single-CPU
> optimization was added in 2.6.29-rc7, commit ID a682604838.

A bit after -rc8, specifically commit
5bee17f18b595937e6beafeee5197868a3f74a06


-- 
Arjan van de Ven 	Intel Open Source Technology Centre
For development, discussion and tips for power savings, 
visit http://www.lesswatts.org
