Re: [PATCH] Documentation: Remove misleading examples of the barriers in wake_*()

On Mon, Sep 21, 2015 at 07:46:11PM +0200, Oleg Nesterov wrote:
> On 09/18, Peter Zijlstra wrote:
> >
> > the text is correct, right?
> 
> Yes, it looks good to me and helpful.
> 
> But damn. I forgot why exactly try_to_wake_up() needs rmb() after
> ->on_cpu check... It looks reasonable in any case, but I do not
> see any strong reason immediately.

I read it like the smp_rmb() we have for
acquire__after_spin_is_unlocked. Except, as you note below, we need an
smp_read_barrier_depends for control barriers as well...

(I'm starting to think we have more control deps than we were
thinking...)

--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1947,7 +1947,13 @@ try_to_wake_up(struct task_struct *p, un
 	while (p->on_cpu)
 		cpu_relax();
 	/*
-	 * Pairs with the smp_wmb() in finish_lock_switch().
+	 * Combined with the control dependency above, we have an effective
+	 * smp_load_acquire() without the need for full barriers.
+	 *
+	 * Pairs with the smp_store_release() in finish_lock_switch().
+	 *
+	 * This ensures that tasks getting woken will be fully ordered against
+	 * their previous state and preserve Program Order.
 	 */
 	smp_rmb();
 
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1073,6 +1073,9 @@ static inline void finish_lock_switch(st
 	 * We must ensure this doesn't happen until the switch is completely
 	 * finished.
 	 *
+	 * In particular, the load of prev->state in finish_task_switch() must
+	 * happen before this.
+	 *
 	 * Pairs with the control dependency and rmb in try_to_wake_up().
 	 */
 	smp_store_release(&prev->on_cpu, 0);


The diff above updates the comments to clarify the release/acquire pair on
p->on_cpu.
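
To spell out the pairing those comments describe, a simplified two-CPU
sketch (accesses paraphrased from the hunks above; illustrative only):

        CPU doing the switch                     CPU doing try_to_wake_up(p)
        (finish_task_switch/finish_lock_switch)

        load prev->state
        ...
        smp_store_release(&prev->on_cpu, 0);
                                                 while (p->on_cpu)   /* ctrl dep */
                                                         cpu_relax();
                                                 smp_rmb();          /* + ctrl dep ~= acquire */
                                                 p->sched_contributes_to_load = ...;
                                                 p->state = TASK_WAKING;

The release orders the prev->state load (and everything else the old CPU
does with the task) before the on_cpu store; the control dependency plus
the smp_rmb() keep the waker's subsequent loads and stores after its
on_cpu load, so the waker cannot touch the task while the old CPU may
still be looking at it.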

> Say,
> 
> 	p->sched_contributes_to_load = !!task_contributes_to_load(p);
> 	p->state = TASK_WAKING;
> 
> we can actually do this before "while (p->on_cpu)", afaics. However
> we must not do this before the previous p->on_rq check.

No, we must not touch the task before p->on_cpu is cleared; up until
that point the task is owned by the 'previous' CPU.
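
Illustratively, the constraint is roughly this ordering in
try_to_wake_up() (paraphrased, not the exact code):

        check p->on_rq                          /* must stay first */
        while (p->on_cpu)                       /* previous CPU still owns p */
                cpu_relax();
        smp_rmb();
        p->sched_contributes_to_load = !!task_contributes_to_load(p);
        p->state = TASK_WAKING;                 /* only now may we write to p */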

> So perhaps this rmb() helps to ensure task_contributes_to_load() can't
> happen before p->on_rq check...
> 
> As for "p->state = TASK_WAKING" we have the control dependency in both
> cases. But the modern fashion suggests to use _CTRL().

Yes, but I'm not sure we should go write:

	while (READ_ONCE_CTRL(p->on_cpu))
		cpu_relax();

Or:

	while (p->on_cpu)
		cpu_relax();

	smp_read_barrier_depends();

It seems to me that doing the smp_mb() (for Alpha) inside the loop might
be sub-optimal.
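
For reference (quoting the definition from memory, so treat it as a
sketch), READ_ONCE_CTRL() expands to roughly:

        #define READ_ONCE_CTRL(x)                                       \
        ({                                                              \
                typeof(x) __val = READ_ONCE(x);                         \
                smp_read_barrier_depends(); /* Enforce control dependency. */ \
                __val;                                                  \
        })

and since smp_read_barrier_depends() is a full mb() on Alpha, using it in
the loop condition puts a full barrier in every iteration of the spin.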

That said, it would be good if Paul (or anyone really) could explain to me
the reason for 5af4692a75da ("smp: Make control dependencies work on
Alpha, improve documentation"). The Changelog simply states that Alpha
needs the mb, but not how/why etc.

> Although cpu_relax()
> should imply barrier(), afaik this is not documented.

I think we're relying on that in many places...
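
For example (from memory, so only a sketch), the x86 implementation at
the time boiled down to:

        static __always_inline void cpu_relax(void)
        {
                /* The "memory" clobber is what makes this a barrier(). */
                asm volatile("rep; nop" ::: "memory");
        }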

> In short, I got lost ;) Now I don't even understand why we do not need
> another rmb() between p->on_rq and p->on_cpu. Suppose a thread T does
> 
> 	set_current_state(...);
> 	schedule();
> 
> it can be preempted in between; after that we have "on_rq && !on_cpu".
> Then it gets the CPU again and calls schedule(), which clears on_rq.
> 
> What guarantees that if ttwu() sees on_rq == 0 cleared by schedule()
> then it can _not_ still see the old value of on_cpu == 0?

Right, let me go have a think about that ;-)
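
For reference, the interleaving Oleg describes, as a rough timeline
(illustrative only):

        T                                        another CPU, in ttwu(T)

        set_current_state(...);
        (preempted)     -> on_cpu = 0, on_rq stays 1
        (runs again)    -> on_cpu = 1
        schedule()      -> on_rq = 0
                                                 sees on_rq == 0 (new value)
                                                 sees on_cpu == 0 (stale value?)
                                                 -> skips the spin while T's CPU
                                                    may still be using the task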