Re: [RFC][PATCH 0/3] update to cpupri algorithm

On Mon, 2011-08-01 at 10:18 -0400, Steven Rostedt wrote:
> On Sat, 2011-07-30 at 11:19 +0200, Mike Galbraith wrote:
> > On Fri, 2011-07-29 at 11:13 -0400, Steven Rostedt wrote:
> > > Hi Mike,
> > > 
> > > Could you try this patch set out. Add the first patch and then
> > > run your tests. The first patch only adds benchmarking, and does not
> > > modify the scheduler algorithm.
> > > 
> > > Do this:
> > > 
> > > 1. apply first patch, build and boot
> > > 2. # mount -t debugfs nodev /sys/kernel/debug
> > > 3. # echo 0 > /sys/kernel/debug/cpupri; ./runtest; cat /sys/kernel/debug/cpupri > output
> > > 
> > > The output will give you the contention of the vector locks in the
> > > cpupri algorithm.
> > > 
> > > Then apply the second patch and do the same thing.
> > > 
> > > Then apply the third patch and do the same thing.
> > > 
> > > After that, could you send me the results of the output file for all
> > > three runs?  The final patch should probably be the best overall
> > > results.
> > 
> > These patches are RFC, so here's my Comment.  Steven rocks.
> 
> /me blushes!

Don't, they're excellent.  /me was having one _hell_ of a hard time
trying to convince box that somewhat tight constraint realtime really
really should be possible on isolated CPUs.

> Thanks for testing! I'll redo the patches to remove the logging, and
> send them to you again. Could you return back a 'Tested-by' tag
> afterward.

(I already removed the logging; the numbers I posted were from that build, but..)

Sure.  I've been beating on them (heftily), and no ill effects have been
detected.  You can have my..
	Tested-by: Mike Galbraith <mgalbraith@xxxxxxx> ||
	Tested-by: Mike Galbraith <efault@xxxxxx> (the real /me)
..now fwiw, they were the deciding factor here.

> Could you also post the results without the two cpupri patches?

Sure, will do.  As noted, the cyclictest numbers were never as nasty as
the benchmark indicated they could (and did) get.  With this particular
test app, there's a nasty feedback perturbation source: tty.  It can
feed on itself if several threads start griping.

While testing your patches, I just let it do its thing with a ~full-up
load it never could handle, and let the chips fall where they may.  The
cyclictest numbers I post will be 1:1 with the results posted, i.e. with
tty taken out of the picture, so the difference won't be as huge as the
lock benchmark showed it can (and did) get.

	-Mike
