Re: NO_HZ and cpu monitoring tools

Is it somehow related to this patch?

http://kerneltrap.org/Linux/Load_Balancing_Cpusets

Anyway, my personal guess is that since the system is almost 100%
idle, the scheduler never has a reason to migrate the task to another
CPU, so at most two CPUs ever accumulate ticks: cpu0, the boot CPU,
and one other CPU running the workload (in this case it just so
happened to be cpu5).

BTW, what kind of fix are you looking for?   There seems to be no
problem here, just a phenomenon.   A random CPU could be picked to run
the process each time, but that would incur the overhead of migrating
the task between CPUs.   If you are looking for a place to change the
reporting itself, I think it is the show_stat() function in
fs/proc/proc_misc.c; maybe you can change the resolution there.

Anyway, reading through move_tasks() and find_busiest_group() in
sched.c, which handle CPU load balancing, there seems to be no
relationship with pageout operations, correct, Rik?

I thought there could be a relationship.   If the process group is
among the least busy, then the pageout operation should have a higher
chance of executing, regardless of which page of that particular
process it comes from.   Logical?


On 10/15/07, Anton Blanchard <anton@xxxxxxxxx> wrote:
>
> Hi,
>
> When using a NO_HZ kernel on ppc64, I noticed top gives some interesting
> results:
>
> Cpu0  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si, 0.0%st
> Cpu1  :  0.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si, 0.0%st
> Cpu2  :  0.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si, 0.0%st
> Cpu3  :  0.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si, 0.0%st
> Cpu4  :  0.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si, 0.0%st
> Cpu5  :  1.1%us,  0.0%sy,  0.0%ni, 98.9%id,  0.0%wa,  0.0%hi,  0.0%si, 0.0%st
> Cpu6  :  0.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si, 0.0%st
> Cpu7  :  0.0%us,  0.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si, 0.0%st
>
> Notice how only 2 cpus report idle time. I'm guessing this happens if
> a core sleeps for longer than the update period in top. Where should
> this be fixed?
>
> It would be possible for the proc read method to add in the right number
> of idle jiffies, or top could just assume no increment means 100% idle.
>
> Anton

--
To unsubscribe from this list: send an email with
"unsubscribe kernelnewbies" to ecartis@xxxxxxxxxxxx
Please read the FAQ at http://kernelnewbies.org/FAQ

