On Tue, Oct 31, 2023 at 10:52:02AM +0100, Peter Zijlstra wrote:
> On Mon, Oct 30, 2023 at 01:11:41PM -0700, Paul E. McKenney wrote:
> > On Mon, Oct 30, 2023 at 09:21:38AM +0100, Peter Zijlstra wrote:
> > > On Fri, Oct 27, 2023 at 04:41:30PM -0700, Paul E. McKenney wrote:
> > > > On Sat, Oct 28, 2023 at 12:46:28AM +0200, Peter Zijlstra wrote:
> > > > > Nah, this is more or less what I feared. I just worry people will come
> > > > > around and put WRITE_ONCE() on the other end. I don't think that'll buy
> > > > > us much. Nor do I think the current READ_ONCE()s actually matter.
> > > >
> > > > My friend, you trust compilers more than I ever will.  ;-)
> > >
> > > Well, we only use the values {0,1,2}, that's contained in the first
> > > byte. Are we saying compiler will not only byte-split but also
> > > bit-split the loads?
> > >
> > > But again, lacking the WRITE_ONCE() counterpart, this READ_ONCE() isn't
> > > getting you anything, and if you really worried about it, shouldn't you
> > > have proposed a patch making it all WRITE_ONCE() back when you did this
> > > tasks-rcu stuff?
> >
> > There are not all that many of them.  If such a WRITE_ONCE() patch would
> > be welcome, I would be happy to put it together.
> >
> > > > > But perhaps put a comment there, that we don't care for the races and
> > > > > only need to observe a 0 once or something.
> > > >
> > > > There are these two passages in the big block comment preceding the
> > > > RCU Tasks code:
> > > >
> > > > // rcu_tasks_pregp_step():
> > > > //	Invokes synchronize_rcu() in order to wait for all in-flight
> > > > //	t->on_rq and t->nvcsw transitions to complete.  This works because
> > > > //	all such transitions are carried out with interrupts disabled.
> > > >
> > > > Does that suffice, or should we add more?
> > >
> > > Probably sufficient. If one were to have used the search option :-)
> > >
> > > Anyway, this brings me to nvcsw, exact same problem there, except
> > > possibly worse, because now we actually do care about the full word.
> > >
> > > No WRITE_ONCE() write side, so the READ_ONCE() don't help against
> > > store-tearing (however unlikely that actually is in this case).
> >
> > Again, if such a WRITE_ONCE() patch would be welcome, I would be happy
> > to put it together.
>
> Welcome is not the right word. What bugs me most is that this was never
> raised when this code was written :/

Me, I consider those READ_ONCE() calls to be documentation as well as
defense against overly enthusiastic optimizers.  "This access is racy."

> Mostly my problem is that GCC generates such utter shite when you
> mention volatile. See, the below patch changes the perfectly fine and
> non-broken:
>
>   0148  1d8:	49 83 06 01          	addq   $0x1,(%r14)
>
> into:
>
>   0148  1d8:	49 8b 06             	mov    (%r14),%rax
>   014b  1db:	48 83 c0 01          	add    $0x1,%rax
>   014f  1df:	49 89 06             	mov    %rax,(%r14)
>
> For absolutely no reason :-(
>
> At least clang doesn't do this, it stays:
>
>   0403  413:	49 ff 45 00          	incq   0x0(%r13)
>
> irrespective of the volatile.

Sounds like a bug in GCC, perhaps depending on the microarchitecture in
question.  And it was in fact reported in the past, but closed as
not-a-bug.  Perhaps clang's fix for this will help GCC along.

And yes, I do see that ++*switch_count in __schedule().

So, at least until GCC catches up to clang's code generation, I take it
that you don't want WRITE_ONCE() for that ->nvcsw increment.

Thoughts on ->on_rq?
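If it helps the discussion, here is a minimal user-space sketch of the pairing
being talked about.  This is not the kernel's code: READ_ONCE()/WRITE_ONCE()
below are simplified stand-ins for the real macros, and on_rq is a hypothetical
stand-in for task_struct::on_rq.

#include <stdio.h>

/* Simplified stand-ins for the kernel's READ_ONCE()/WRITE_ONCE() macros. */
#define READ_ONCE(x)	 (*(const volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, v) do { *(volatile __typeof__(x) *)&(x) = (v); } while (0)

/* Hypothetical stand-in for task_struct::on_rq, which only takes {0,1,2}. */
static int on_rq;

/* Writer side: the proposed WRITE_ONCE() counterpart keeps the compiler
 * from tearing, duplicating, or eliding the store. */
static void set_on_rq(int state)
{
	WRITE_ONCE(on_rq, state);
}

/* Reader side: the existing READ_ONCE() keeps the compiler from tearing
 * or refetching the load. */
static int get_on_rq(void)
{
	return READ_ONCE(on_rq);
}

int main(void)
{
	set_on_rq(1);
	printf("on_rq = %d\n", get_on_rq());
	return 0;
}

The point is only that both sides of the race get a single full-width access;
whether today's compilers would ever actually tear a store of {0,1,2} is the
separate question argued above.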
							Thanx, Paul

> ---
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 802551e0009b..d616211b9151 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -6575,8 +6575,8 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
>   */
>  static void __sched notrace __schedule(unsigned int sched_mode)
>  {
>  	struct task_struct *prev, *next;
> -	unsigned long *switch_count;
> +	volatile unsigned long *switch_count;
>  	unsigned long prev_state;
>  	struct rq_flags rf;
>  	struct rq *rq;
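For anyone wanting to poke at the codegen difference without a kernel build,
a stand-alone reduction (my own, not taken from the build quoted above) is
just an increment through a volatile-qualified pointer, which is what the
patch turns switch_count into; compare gcc -O2 -S against clang -O2 -S.

/* Reduced reproducer: increment through a volatile-qualified pointer, as
 * ++*switch_count in __schedule() becomes with the patch applied.  Whether
 * this compiles to a single add-to-memory instruction or a load/add/store
 * sequence is a compiler/target detail; the listings quoted above show GCC
 * doing the latter and clang keeping the single incq.
 */
void bump(volatile unsigned long *switch_count)
{
	++*switch_count;
}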