On Mon, Dec 06, 2021 at 12:32:22PM +0100, Peter Zijlstra wrote:

> Sorry, I haven't been feeling too well and as such procrastinated on
> this because thinking is required :/ Trying to pick up the bits.

*sigh* and yet another week gone... someone was unhappy about
refcount_t.

> No, the failure case is different; umcg_notify_resume() will simply
> block A until someone sets A::state == RUNNING and kicks it, which will
> be no-one.
>
> Now, the above situation is actually simple to fix, but it gets more
> interesting when we're using sys_umcg_wait() to build wait primitives.
> Because in that case we get stuff like:
>
>	for (;;) {
>		self->state = RUNNABLE;
>		smp_mb();
>		if (cond)
>			break;
>		sys_umcg_wait();
>	}
>	self->state = RUNNING;
>
> And we really need to not block and also not do sys_umcg_wait() early.
>
> So yes, I agree that we need a special case here that ensures
> umcg_notify_resume() doesn't block. Let me ponder naming and comments.
> Either a TF_COND_WAIT or a whole new state. I can't decide yet.
>
> Now, obviously if you do a random syscall anywhere around here, you get
> to keep the pieces :-)

Something like so I suppose..

--- a/include/uapi/linux/umcg.h
+++ b/include/uapi/linux/umcg.h
@@ -42,6 +42,32 @@
  *
  */
 #define UMCG_TF_PREEMPT		0x0100U
+/*
+ * UMCG_TF_COND_WAIT: indicate the task *will* call sys_umcg_wait()
+ *
+ * Enables server loops like (vs umcg_sys_exit()):
+ *
+ *   for (;;) {
+ *	self->state = UMCG_TASK_RUNNABLE | UMCG_TF_COND_WAIT;
+ *	// smp_mb() implied by xchg()
+ *
+ *	runnable_ptr = xchg(self->runnable_workers_ptr, NULL);
+ *	while (runnable_ptr) {
+ *		next = runnable_ptr->runnable_workers_ptr;
+ *
+ *		umcg_server_add_runnable(self, runnable_ptr);
+ *
+ *		runnable_ptr = next;
+ *	}
+ *
+ *	self->next = umcg_server_pick_next(self);
+ *	sys_umcg_wait(0, 0);
+ *   }
+ *
+ * without a signal or interrupt in between setting umcg_task::state and
+ * sys_umcg_wait() resulting in an infinite wait in umcg_notify_resume().
+ */
+#define UMCG_TF_COND_WAIT	0x0200U
 
 #define UMCG_TF_MASK		0xff00U
--- a/kernel/sched/umcg.c
+++ b/kernel/sched/umcg.c
@@ -180,7 +180,7 @@ void umcg_worker_exit(void)
 /*
  * Do a state transition, @from -> @to, and possible read @next after that.
  *
- * Will clear UMCG_TF_PREEMPT.
+ * Will clear UMCG_TF_PREEMPT, UMCG_TF_COND_WAIT.
  *
  * When @to == {BLOCKED,RUNNABLE}, update timestamps.
  *
@@ -216,7 +216,8 @@ static int umcg_update_state(struct task
 		if ((old & UMCG_TASK_MASK) != from)
 			goto fail;
 
-		new = old & ~(UMCG_TASK_MASK | UMCG_TF_PREEMPT);
+		new = old & ~(UMCG_TASK_MASK |
+			      UMCG_TF_PREEMPT | UMCG_TF_COND_WAIT);
 		new |= to & UMCG_TASK_MASK;
 
 	} while (!unsafe_try_cmpxchg_user(&self->state, &old, new, Efault));
@@ -567,11 +568,13 @@ void umcg_notify_resume(struct pt_regs *
 	if (state == UMCG_TASK_RUNNING)
 		goto done;
 
-	// XXX can get here when:
-	//
-	//   self->state = RUNNABLE
-	//   <signal>
-	//   sys_umcg_wait();
+	/*
+	 * See comment at UMCG_TF_COND_WAIT; TL;DR: user *will* call
+	 * sys_umcg_wait() and signals/interrupts shouldn't block
+	 * return-to-user.
+	 */
+	if (state == (UMCG_TASK_RUNNABLE | UMCG_TF_COND_WAIT))
+		goto done;
 
 	if (state & UMCG_TF_PREEMPT) {
 		if (umcg_pin_pages())
@@ -658,6 +661,13 @@ SYSCALL_DEFINE2(umcg_wait, u32, flags, u
 	if (ret)
 		goto unblock;
 
+	/*
+	 * Clear UMCG_TF_COND_WAIT *and* check state == RUNNABLE.
+	 */
+	ret = umcg_update_state(self, tsk, UMCG_TASK_RUNNABLE, UMCG_TASK_RUNNABLE);
+	if (ret)
+		goto unpin;
+
 	if (worker) {
 		ret = umcg_enqueue_runnable(tsk);
 		if (ret)