Re: [PATCH 6/7] numa,sched: normalize faults_from stats and weigh by CPU use

On 01/20/2014 11:57 AM, Peter Zijlstra wrote:
> On Fri, Jan 17, 2014 at 04:12:08PM -0500, riel@xxxxxxxxxx wrote:
>> diff --git a/include/linux/sched.h b/include/linux/sched.h
>> index 0af6c1a..52de567 100644
>> --- a/include/linux/sched.h
>> +++ b/include/linux/sched.h
>> @@ -1471,6 +1471,8 @@ struct task_struct {
>>  	int numa_preferred_nid;
>>  	unsigned long numa_migrate_retry;
>>  	u64 node_stamp;			/* migration stamp  */
>> +	u64 last_task_numa_placement;
>> +	u64 last_sum_exec_runtime;
>>  	struct callback_head numa_work;
>>  
>>  	struct list_head numa_entry;
> 
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 8e0a53a..0d395a0 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -1422,11 +1422,41 @@ static void update_task_scan_period(struct task_struct *p,
>>  	memset(p->numa_faults_locality, 0, sizeof(p->numa_faults_locality));
>>  }
>>  
>> +/*
>> + * Get the fraction of time the task has been running since the last
>> + * NUMA placement cycle. The scheduler keeps similar statistics, but
>> + * decays those on a 32ms period, which is orders of magnitude off
>> + * from the dozens-of-seconds NUMA balancing period. Use the scheduler
>> + * stats only if the task is so new there are no NUMA statistics yet.
>> + */
>> +static u64 numa_get_avg_runtime(struct task_struct *p, u64 *period)
>> +{
>> +	u64 runtime, delta, now;
>> +	/* Use the start of this time slice to avoid calculations. */
>> +	now = p->se.exec_start;
>> +	runtime = p->se.sum_exec_runtime;
>> +
>> +	if (p->last_task_numa_placement) {
>> +		delta = runtime - p->last_sum_exec_runtime;
>> +		*period = now - p->last_task_numa_placement;
>> +	} else {
>> +		delta = p->se.avg.runnable_avg_sum;
>> +		*period = p->se.avg.runnable_avg_period;
>> +	}
>> +
>> +	p->last_sum_exec_runtime = runtime;
>> +	p->last_task_numa_placement = now;
>> +
>> +	return delta;
>> +}
> 
> Have you tried what happens if you use p->se.avg.runnable_avg_sum /
> p->se.avg.runnable_avg_period instead? If that also works it avoids
> growing the data structures and keeping yet another set of runtime
> stats.

That is what I started out with, and the results were not
as stable as with this calculation.

Having said that, I did that before I came up with patch 7/7,
so maybe the effect would no longer be as pronounced as it
was before...

I can send in a simplified version, if you prefer.
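
For reference, a rough sketch of what that simplified version
could look like, reusing only the scheduler's decayed averages
(untested, and the 32ms decay caveat from the comment in the
patch still applies):

static u64 numa_get_avg_runtime(struct task_struct *p, u64 *period)
{
	/*
	 * Sketch only: rely on the scheduler's decayed runnable
	 * averages instead of keeping a separate set of stats.
	 * These decay on a 32ms period, so they reflect recent
	 * behaviour, not the whole NUMA balancing interval.
	 */
	*period = p->se.avg.runnable_avg_period;
	return p->se.avg.runnable_avg_sum;
}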

-- 
All rights reversed
