On Tue, Aug 09, 2016 at 08:06:40PM +0000, Mathieu Desnoyers wrote:

> >> +static int rseq_increment_event_counter(struct task_struct *t)
> >> +{
> >> +	if (__put_user(++t->rseq_event_counter,
> >> +			&t->rseq->u.e.event_counter))
> >> +		return -1;
> >> +	return 0;
> >> +}
> >>
> >> +void __rseq_handle_notify_resume(struct pt_regs *regs)
> >> +{
> >> +	struct task_struct *t = current;
> >> +
> >> +	if (unlikely(t->flags & PF_EXITING))
> >> +		return;
> >> +	if (!access_ok(VERIFY_WRITE, t->rseq, sizeof(*t->rseq)))
> >> +		goto error;
> >> +	if (__put_user(raw_smp_processor_id(), &t->rseq->u.e.cpu_id))
> >> +		goto error;
> >> +	if (rseq_increment_event_counter(t))
> >
> > It seems a shame to not use a single __put_user() here. You did the
> > layout to explicitly allow for this, but then you don't.
>
> The event counter increment needs to be performed at least once before
> returning to user-space whenever the thread is preempted or has a signal
> delivered. This counter increment needs to occur even if we are not nested
> over a restartable assembly block. (more detailed explanation about this
> follows at the end of this email)
>
> The rseq_ip_fixup only ever needs to update the rseq_cs pointer
> field if it preempts/delivers a signal over a restartable
> assembly block, which happens very rarely.
>
> Therefore, since the event counter increment is more frequent than
> setting rseq_cs ptr, I don't see much value in trying to combine
> those two into a single __put_user().
>
> The reason why I combined both the cpu_id and event_counter
> fields into the same 64-bit integer is for user-space rseq_start()
> to be able to fetch them through a single load when the architecture
> allows it.

I wasn't talking about rseq_ip_fixup(); I was talking about both
unconditional __put_user()'s on cpu_id and event_counter. These are 2
unconditional u32 stores that could very easily be done as a single u64
store (on 64-bit hardware).
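
For concreteness, a minimal sketch of what that combined store could look
like, assuming a 64-bit kernel and assuming the union in the user-visible
struct rseq also exposes the cpu_id/event_counter pair as a single 64-bit
member (called u.v below; only u.e appears in the quoted code, so that name,
and the helper name, are assumptions for illustration, not the actual patch):

#ifdef CONFIG_64BIT
static int rseq_update_cpu_id_event_counter(struct task_struct *t)
{
	/* Local mirror of the user-visible cpu_id/event_counter pair. */
	union {
		struct {
			u32 cpu_id;
			u32 event_counter;
		} e;
		u64 v;
	} u;

	u.e.cpu_id = raw_smp_processor_id();
	u.e.event_counter = ++t->rseq_event_counter;

	/* One 8-byte __put_user() instead of two 4-byte ones. */
	if (__put_user(u.v, &t->rseq->u.v))
		return -1;
	return 0;
}
#endif

A single 64-bit store would mean one user-space access (and one fault check)
on the return-to-user path, mirroring the single 64-bit load the quoted reply
already describes for rseq_start() on the user side.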