On 9 March 2015 at 19:03, stan <stanl-fedorauser@xxxxxxxxxxx> wrote:
> I'm running Fedora 21 with a custom compiled kernel,
> 3.19.0-1.20150211.fc21.x86_64.
>
> I have a multi core system with 6 cores. All are recognized by the
> kernel.
>
> But, when I run a compile job with -j6, in order to allow all six cores
> to be used, it limits the total amount of usage to 100% of a *single*
> core. So, it might use all six cores, but the sum of the percentages
> on those six cores is always around 100% of one core. This is from
> htop output.
>
> On large compilations, like the kernel or firefox, even using 4 cores
> could drastically reduce compile time.
>
> I've looked at /etc/security/limits.conf, but it doesn't seem to have a
> setting for this. I've also looked at the /proc system to see if there
> is a kernel variable, though that seems unlikely, with no luck. Online
> searching found ways to limit the amount that a single job can get, but
> not how to set this for a user. There must be a configuration variable
> somewhere that is limiting the amount of total cpu a user can use. But
> I can't find it.
>
> Can anyone help?

I've been wondering about this thread for a while, as I use make -j N
myself: some builds I've had to deal with (including the kernel) have
pretty much shut down the machines they were running on when allowed to
run in unrestricted -j mode.

Some people have said that this is priority related; that's not the
case. I can run "make -j10" here (dcmtk-3.6.1, in case anyone wants to
know) and see multiple cc1plus processes going up to 100% at points;
this is RHEL 6 with a 2.6.32 kernel. What I do see during that process,
though, is that early on multiple jobs run at less than 100%, maybe at
approximately 100%/N. This may be due to how make starts parallel jobs,
or it may simply be I/O or other non-CPU limiting on the compilation.

You may want to check that the .NOTPARALLEL directive is not present
(a quick way to look is sketched at the end of this mail):
http://www.gnu.org/software/make/manual/make.html#Parallel
though I think that would simply prevent multiple processes. To repeat:
make -j N should be able to start N processes, and they should not be
subject to an overall limit other than hardware. (Incidentally, a
single process can use more than 100% if it's written to use
parallelisation; you can often see the jvm doing this.)

Since I can't reproduce this problem I'm not sure what's causing it. If
you really are finding make subprocesses limited to 100% cpu across the
lot, then maybe have a look to see whether there are any cgroups limits
active (again, see the sketch below):
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Resource_Management_Guide/ch01.html
It may also be worth running on the stock Fedora kernel, to test that
it's not something you've turned on in your custom kernel.

Like I mentioned above, I've found that -j without N can really make
things drag on heavy builds. Even if you don't care about running other
things at the same time, you can often get a faster build by choosing a
good N, as too many processes at once compete for other resources and
run inefficiently. Things like hyperthreading can compound this. The
only time I've gone above the core count was building the kernel on a
dual core intel machine (no hyperthreading), where N=3 did turn out to
be fastest; but that 50% over-subscription rule may not always hold.
With hyperthreading present, I've found with other processing tasks
that pushing above 50% total system load (i.e. more than N*100% for N
physical cores, since the virtual cores are counted too) can actually
slow down the overall task noticeably.
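
For the .NOTPARALLEL check mentioned above, something like this should
do, assuming GNU grep and that you're sitting at the top of the source
tree:

    # any hit here disables parallel execution for that makefile
    grep -rn 'NOTPARALLEL' .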
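
For the cgroups check, a minimal sketch; the paths assume the cgroup v1
layout that RHEL 6 / Fedora 21 era systems use, and your mount points
may differ:

    # which cgroups is the current shell in?
    cat /proc/self/cgroup
    # for the cpu controller: a quota equal to one period caps the
    # whole group at 100% of a single core; substitute the subdirectory
    # named in /proc/self/cgroup to inspect a per-session group
    cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us    # -1 means no limit
    cat /sys/fs/cgroup/cpu/cpu.cfs_period_us

If cfs_quota_us equals cfs_period_us for whatever cgroup your login
session lands in, that would produce exactly the symptom you describe:
six busy processes whose usage sums to about 100% of one core.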
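
On choosing N, I tend to derive it from the core count rather than
hard-code it; nproc is in coreutils, and the +1 is only the usual rule
of thumb, not gospel:

    make -j"$(nproc)"             # one job per core
    make -j"$(($(nproc) + 1))"    # mild over-subscription

Timing a couple of candidate values on a clean tree (make clean, then
time make -jN) is the only reliable way to find what wins on a given
machine.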
--
imalone
http://ibmalone.blogspot.co.uk