Re: Information about some task_t members

On Saturday 31 January 2009 19:13:38 Andreas Leppert wrote:
> Hello,

Hi!

First off, which version of the kernel are you looking at?

> could you give me some information about these members of the
> task_struct structure in sched.h? What do they mean? I have written down
> what I do understand ... hope you can give me some help. Kernel reading
> isn't that easy, there are so less comments...

I know :) I guess once you understand it all, you transcend to guru status, read 
through the source code and see the reason behind (or above) it. At that point 
you no longer need comments, and therefore avoid writing them. At least, that's 
my theory.


A lot of my comments below are based on the most excellent book: Understanding 
the Linux Kernel, 3rd ed., by Bovet and Cesati.

> sleep_avg
> it has something to do with how long a task sleeps...?!

This is the average sleep time for the process. It gives you some indication 
of how 'active' the task is.

AFAIK, this is not used in CFS and has been removed. 

> timestamp
> timestamp, but what should it indicate?

What do timestamps normally represent? This is the 'last modified' stamp for 
the process: (de)queueing and so on.

This has also been changed with CFS. The kernel now keeps a timestamp in the 
scheduling class for the time of the last arriving process and the last time 
queued.

> last_ran
> time when the task has last run

More precisely, the time when *this* task was switched out for another task. 
The value represents the time of the *switch*.

> sched_time
> perhaps the time when this task was last scheduled...?

I cannot find this variable in the current source or in UTLK. I *can* find 
sched_timer (the hrtimer used to schedule the high-resolution timer tick).

> time_slice
> length of a time slice?

What is left of the timeslice. The timeslice (I assume you're running a kernel 
with the O(1) scheduler, yes?) is computed from:
1) The static priority, using the formula (result in milliseconds):
	basetime = (140 - static_priority) * 20   (if static_priority < 120)
	basetime = (140 - static_priority) * 5    (otherwise)
2) The dynamic priority, which takes into account how 'aggressive' the task is 
at hogging the CPU. The more CPU-bound the task, the lower its dynamic 
priority. The kernel divides tasks into batch and interactive ones.

I wrote a bit about this in my scheduler comparison a while back:
http://folk.ntnu.no/henrikau/sched/rt_sched_pro.pdf -- Sec. 2.4.1.1, to be 
precise. It might be of some help.

> ncvsw
> the number of voluntary context switches...

404: Not Found

I do, however, find nvcsw. Typo? :-)

> nivcsw
> the number of involuntary context switches...

        /*
         * Cumulative resource counters for dead threads in the group,
         * and for reaped dead child processes forked by this group.
         * Live threads maintain their own counters and add to these
         * in __exit_signal, except for the group leader.
         */
unsigned long nvcsw, nivcsw, cnvcsw, cnivcsw;

nvcsw:
nivcsw:

From Documentation/accounting/getdelays.c:

void task_context_switch_counts(struct taskstats *t)
{
        printf("\n\nTask   %15s%15s\n"
               "       %15llu%15llu\n",
               "voluntary", "nonvoluntary",
               (unsigned long long)t->nvcsw, (unsigned long long)t->nivcsw);
}

So, looks like you were spot-on in your assumption :-)
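Since 2.6.23 you can also read these counters for any task directly from 
procfs, without going through taskstats:

```shell
# voluntary_ctxt_switches / nonvoluntary_ctxt_switches live in
# /proc/<pid>/status; /proc/self here is just the shell running the command.
grep ctxt_switches /proc/self/status
```

Voluntary means the task blocked or yielded on its own; nonvoluntary means it 
was preempted, e.g. when its timeslice ran out.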

cnvcsw:
cnivcsw:

I cannot find any sane documentation, but from the source it looks like they 
are cumulative counters for nvcsw/nivcsw, accumulated in the signal struct 
from dead threads in the group and from reaped dead child processes (cf. the 
comment quoted above).

> Thanks in advance!
> Andreas

HTH, 
henrik

--
To unsubscribe from this list: send an email with
"unsubscribe kernelnewbies" to ecartis@xxxxxxxxxxxx
Please read the FAQ at http://kernelnewbies.org/FAQ


