Re: [PATCH v2] intel_pstate: Take core C0 time into account for core busy calculation

On Thu, Feb 20, 2014 at 01:44:24AM +0000, Stefan Lippers-Hollmann wrote:
> Hi
> 
> On Wednesday 19 February 2014, Greg KH wrote:
> > On Wed, Feb 19, 2014 at 10:37:26PM +0000, Stefan Lippers-Hollmann wrote:
> > > Hi
> > > 
> > > On Wednesday 19 February 2014, dirk.brandewie@xxxxxxxxx wrote:
> > > > From: Dirk Brandewie <dirk.j.brandewie@xxxxxxxxx>
> > > > 
> > > > Take non-idle time into account when calculating core busy time.
> > > > This ensures that intel_pstate will notice a decrease in load.
> > > > 
> > > > backport of commit: fcb6a15c2e7e76d493e6f91ea889ab40e1c643a4
> > > > Applies to v3.10.30, v3.12.11, v3.13.3
> > > > 
> > > > References: https://bugzilla.kernel.org/show_bug.cgi?id=66581
> > > > Cc: 3.10+ <stable@xxxxxxxxxxxxxxx> # 3.10+
> > > > Signed-off-by: Dirk Brandewie <dirk.j.brandewie@xxxxxxxxx>
> > > 
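For reference, the change described in the quoted commit message boils down to
scaling the APERF/MPERF performance ratio by the fraction of the sample
interval the core actually spent in C0, so a mostly idle core no longer looks
fully busy. Below is a minimal user-space sketch of that idea only; it is not
the driver code, which lives in drivers/cpufreq/intel_pstate.c and uses
fixed-point math on the APERF, MPERF and TSC MSRs, and the sample values are
made up:

/*
 * Sketch of the "take C0 time into account" busy calculation.
 * Illustration only; not the actual intel_pstate code.
 */
#include <stdint.h>
#include <stdio.h>

/* Deltas of the three counters over one sample interval. */
struct sample {
	uint64_t aperf;	/* actual cycles, counted only while in C0     */
	uint64_t mperf;	/* reference cycles, counted only while in C0  */
	uint64_t tsc;	/* reference cycles, counted all the time      */
};

/*
 * Old behaviour: busy = aperf/mperf only, so a core that is mostly idle
 * but briefly runs at a high P-state still looks "busy".
 * New behaviour: scale that ratio by mperf/tsc, the fraction of the
 * interval spent in C0, so a drop in load is actually noticed.
 */
static unsigned int core_busy_pct(const struct sample *s)
{
	uint64_t core_pct = s->aperf * 100 / s->mperf; /* perf ratio, %  */
	uint64_t c0_pct   = s->mperf * 100 / s->tsc;   /* time in C0, %  */

	return (unsigned int)(core_pct * c0_pct / 100);
}

int main(void)
{
	/* Core ran at full speed, but only for 10% of the interval. */
	struct sample s = { .aperf = 100000, .mperf = 100000, .tsc = 1000000 };

	printf("core busy: %u%%\n", core_busy_pct(&s)); /* ~10%, not 100% */
	return 0;
}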
> > > After testing this patch, I can confirm that this revision of the patch
> > > no longer crashes the kernel (tested on sandy-bridge and ivy-bridge), 
> > > thanks a lot.
> > 
> > But does it slow it down?  My machine has horrible build times with this
> > patch installed on 3.14-rc2.  With it reverted, my builds go back to
> > normal speeds (2 minutes instead of 8 minutes).
> [...]
> 
> By the time of my first response, I had just booted a new kernel with v2
> of "intel_pstate: Take core C0 time into account for core busy 
> calculation" on a couple of systems, but hadn't done any benchmarks 
> yet. Now I've done some very preliminary benchmarks.
> 
> Benchmark environment:
> Building v3.13.4-rc1 with a very modular, distro-like configuration 
> for both amd64 and i686 within a buildd environment (Debian/sid, 
> pbuilder). These figures include the overhead of the buildd environment:
> unpacking the gzipped build chroot, installing the required build 
> dependencies (fetched from a local proxy server) into this chroot, 
> untarring (single-threaded xz) the 3.13.0 tarball, applying the 
> patches (incl. patch-3.13.3 + patch-3.13.4-rc1) and building the 
> packages (using single-threaded xz for the package components). The 
> systems have been freshly rebooted for each test and are otherwise 
> close to idle; the build concurrency has been set to 16.
> 
> system A (sandy-bridge):
> build host running 3.13.4-rc1 without any revision of this patch applied
> real    27m35.298s
> user    177m28.016s
> sys     13m54.164s
> 
> build host running 3.13.4-rc1 with revision 2 of this patch applied
> real    28m3.600s
> user    177m59.199s
> sys     14m11.025s
> 
> system B (ivy-bridge):
> build host running 3.13.4-rc1 without any revision of this patch applied
> real    24m51.323s
> user    156m0.813s
> sys     12m22.535s
> 
> build host running 3.13.4-rc1 with revision 2 of this patch applied
> real    25m20.189s
> user    157m44.803s
> sys     12m46.669s
> 
> Each test has only been done once, so I'm not calling this a scientific
> or representative benchmark yet, but the delta is similar on both 
> systems - slightly, but not massively, worse with this patch applied.

I've dropped this patch now, and am working with Dirk on the upstream
patch to try to get my machine to actually ramp up the cpu, as with the
patch, it seems stuck at the lowest speed.
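One quick way to see whether the driver ramps up under load is to poll
scaling_cur_freq in cpufreq sysfs while a build is running. A minimal sketch,
assuming the standard cpufreq sysfs layout and cpu0:

/*
 * Poll the current frequency of cpu0 once per second and print it.
 * Sketch only; assumes the standard cpufreq sysfs layout.
 */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	const char *path =
		"/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq";

	for (;;) {
		FILE *f = fopen(path, "r");
		unsigned long khz;

		if (!f)
			return 1;
		if (fscanf(f, "%lu", &khz) == 1)	/* value is in kHz */
			printf("cpu0: %lu MHz\n", khz / 1000);
		fclose(f);
		sleep(1);
	}
}

Watching that output during a build should show whether the frequency stays
pinned near the minimum, as described above.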

You can watch the lkml thread with the:
	Subject: Commit fcb6a15c2e7e (intel_pstate: Take core C0 time into account for core busy calculation) sucks rocks

for the results of what is happening here.

thanks,

greg k-h
--
To unsubscribe from this list: send the line "unsubscribe stable" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html



