Re: [PATCH v2] sched/core: Preempt current task in favour of bound kthread

On Tue, Dec 10, 2019 at 11:13:30AM +0530, Srikar Dronamraju wrote:
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 44123b4d14e8..82126cbf62cd 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -2664,7 +2664,12 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
>   */
>  int wake_up_process(struct task_struct *p)
>  {
> -	return try_to_wake_up(p, TASK_NORMAL, 0);
> +	int wake_flags = 0;
> +
> +	if (is_per_cpu_kthread(p))
> +		wake_flags = WF_KTHREAD;
> +
> +	return try_to_wake_up(p, TASK_NORMAL, wake_flags);
>  }
>  EXPORT_SYMBOL(wake_up_process);

Why wake_up_process() and not try_to_wake_up()? This way
wake_up_state(p, TASK_NORMAL) is no longer the same as
wake_up_process(p), and that's weird!
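
Something like the below (untested sketch, reusing the WF_KTHREAD flag
from your patch) would keep the two entry points equivalent, since every
wakeup funnels through try_to_wake_up():

	static int
	try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
	{
		...
		/* Tag bound-kthread wakeups for every caller, not just wake_up_process(). */
		if (is_per_cpu_kthread(p))
			wake_flags |= WF_KTHREAD;
		...
	}

and wake_up_process() can stay a plain try_to_wake_up(p, TASK_NORMAL, 0).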

> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 69a81a5709ff..36486f71e59f 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6660,6 +6660,27 @@ static void set_skip_buddy(struct sched_entity *se)
>  		cfs_rq_of(se)->skip = se;
>  }
>  
> +static int kthread_wakeup_preempt(struct rq *rq, struct task_struct *p, int wake_flags)
> +{
> +	struct task_struct *curr = rq->curr;
> +	struct cfs_rq *cfs_rq = task_cfs_rq(curr);
> +
> +	if (!(wake_flags & WF_KTHREAD))
> +		return 0;
> +
> +	if (p->nr_cpus_allowed != 1 || curr->nr_cpus_allowed == 1)
> +		return 0;

Per the above, WF_KTHREAD already implies p->nr_cpus_allowed == 1.

> +	if (cfs_rq->nr_running > 2)
> +		return 0;
> +
> +	/*
> +	 * Don't preempt, if the waking kthread is more CPU intensive than
> +	 * the current thread.
> +	 */
> +	return p->nvcsw * curr->nivcsw >= p->nivcsw * curr->nvcsw;

Both of these conditions seem somewhat arbitrary. The number of context
switches does not correspond to CPU usage _at_all_; a task can rack up
voluntary switches while still burning plenty of CPU in between them.

vtime OTOH does reflect exactly that: if a task runs a lot, it will be
further to the right in the tree.
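
Purely as illustration (untested, ignores group scheduling and vruntime
normalisation details; the helper name and WF_KTHREAD are from your
patch), a vruntime-based variant would look more like:

	static int kthread_wakeup_preempt(struct rq *rq, struct task_struct *p, int wake_flags)
	{
		struct sched_entity *se = &rq->curr->se, *pse = &p->se;

		if (!(wake_flags & WF_KTHREAD))
			return 0;

		if (task_cfs_rq(rq->curr)->nr_running > 2)
			return 0;

		/*
		 * Preempt only if the kthread has accrued less vruntime,
		 * i.e. has actually used less CPU than the current task.
		 */
		return (s64)(se->vruntime - pse->vruntime) > 0;
	}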

> +}
> +
>  /*
>   * Preempt the current task with a newly woken task if needed:
>   */
> @@ -6716,7 +6737,7 @@ static void check_preempt_wakeup(struct rq *rq, struct task_struct *p, int wake_
>  	find_matching_se(&se, &pse);
>  	update_curr(cfs_rq_of(se));
>  	BUG_ON(!pse);
> -	if (wakeup_preempt_entity(se, pse) == 1) {
> +	if (wakeup_preempt_entity(se, pse) == 1 || kthread_wakeup_preempt(rq, p, wake_flags)) {
>  		/*
>  		 * Bias pick_next to pick the sched entity that is
>  		 * triggering this preemption.

How about something like:

	if (wakeup_preempt_entity(se, pse) >= 1-!!(wake_flags & WF_KTHREAD))

instead? Then we allow kthreads a little more preemption room.
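
For reference, wakeup_preempt_entity() is essentially the below (comments
added by me, not part of this patch):

	static int
	wakeup_preempt_entity(struct sched_entity *curr, struct sched_entity *se)
	{
		s64 gran, vdiff = curr->vruntime - se->vruntime;

		/* se has run at least as much as curr; no preemption. */
		if (vdiff <= 0)
			return -1;

		/* se is behind curr by more than the wakeup granularity; preempt. */
		gran = wakeup_gran(se);
		if (vdiff > gran)
			return 1;

		/* Behind curr, but within the granularity. */
		return 0;
	}

So 1-!!(wake_flags & WF_KTHREAD) lowers the preemption threshold from 1
to 0 for bound-kthread wakeups: the kthread also preempts when it is
behind current in vruntime but within the wakeup granularity, without
needing a separate heuristic.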


