Hello, Frederic!

KCSAN complains about the following when augmented by Marco's latest
patch series:

[   15.432187] ==================================================================
[   15.440802] BUG: KCSAN: data-race in rcu_nocb_cb_kthread / rcu_nocb_gp_kthread
[   15.441715]
[   15.441895] read (marked) to 0xffff8a05df5acb50 of 1 bytes by task 153 on cpu 7:
[   15.443781]  rcu_nocb_gp_kthread+0x237/0x1180
[   15.444272]  kthread+0x29b/0x2b0
[   15.444617]  ret_from_fork+0x22/0x30
[   15.445123]
[   15.445280] no locks held by rcuog/12/153.
[   15.445694] irq event stamp: 7379
[   15.446063] hardirqs last enabled at (7379): [<ffffffffa8b1b23a>] _raw_spin_unlock_irqrestore+0x3a/0x70
[   15.447870] hardirqs last disabled at (7378): [<ffffffffa75b14c2>] rcu_nocb_gp_kthread+0x2d2/0x1180
[   15.449478] softirqs last enabled at (7232): [<ffffffffa74bf844>] __irq_exit_rcu+0x64/0xc0
[   15.451430] softirqs last disabled at (7225): [<ffffffffa74bf844>] __irq_exit_rcu+0x64/0xc0
[   15.452259]
[   15.452418] write to 0xffff8a05df5acb50 of 1 bytes by task 169 on cpu 10:
[   15.454395]  rcu_nocb_cb_kthread+0x4b0/0x760
[   15.454835]  kthread+0x29b/0x2b0
[   15.458271]  ret_from_fork+0x22/0x30
[   15.458657]
[   15.458817] 1 lock held by rcuop/14/169:
[   15.459220]  #0: ffff8a05df5acc70 (&rdp->nocb_lock){-.-.}-{2:2}, at: rcu_nocb_cb_kthread+0x2ff/0x760
[   15.460127] irq event stamp: 62
[   15.460441] hardirqs last enabled at (61): [<ffffffffa74bf40a>] __local_bh_enable_ip+0xca/0x120
[   15.461305] hardirqs last disabled at (62): [<ffffffffa75b2657>] rcu_nocb_cb_kthread+0x2e7/0x760
[   15.462169] softirqs last enabled at (60): [<ffffffffa75adbed>] local_bh_enable+0xd/0x30
[   15.462973] softirqs last disabled at (58): [<ffffffffa75ad35d>] local_bh_disable+0xd/0x30

And gdb fingers these two accesses:

(gdb) l*rcu_nocb_gp_kthread+0x237
0xffffffff811b1427 is in rcu_nocb_gp_kthread (kernel/rcu/rcu_segcblist.h:71).
66	}
67
68	static inline bool rcu_segcblist_test_flags(struct rcu_segcblist *rsclp,
69						    int flags)
70	{
71		return READ_ONCE(rsclp->flags) & flags;
72	}
73
74	/*
75	 * Is the specified rcu_segcblist enabled, for example, not corresponding

(gdb) l*rcu_nocb_cb_kthread+0x4b0
0xffffffff811b2820 is in rcu_nocb_cb_kthread (kernel/rcu/rcu_segcblist.h:59).
54	}
55
56	static inline void rcu_segcblist_set_flags(struct rcu_segcblist *rsclp,
57						   int flags)
58	{
59		rsclp->flags |= flags;
60	}
61
62	static inline void rcu_segcblist_clear_flags(struct rcu_segcblist *rsclp,
63						     int flags)

Any reason not to turn that "rsclp->flags |= flags" into a WRITE_ONCE()?
Maybe a READ_ONCE() as well, if multiple CPUs can be updating this field
(but I hope not!).

This also found the following rcutorture data race that I will be beating
my head against.  ;-)

							Thanx, Paul

------------------------------------------------------------------------

(gdb) l*rcu_torture_fwd_prog+0x5ee
0xffffffff81194fbe is in rcu_torture_fwd_prog (kernel/rcu/rcutorture.c:2386).
2381			       !shutdown_time_arrived() &&
2382			       !READ_ONCE(rcu_fwd_emergency_stop) && !torture_must_stop()) {
2383			rfcp = READ_ONCE(rfp->rcu_fwd_cb_head);
2384			rfcpn = NULL;
2385			if (rfcp)
2386				rfcpn = READ_ONCE(rfcp->rfc_next);
2387			if (rfcpn) {
2388				if (rfcp->rfc_gps >= MIN_FWD_CB_LAUNDERS &&
2389				    ++n_max_gps >= MIN_FWD_CBS_LAUNDERED)
2390					break;
(gdb) l*rcu_torture_fwd_cb_cr+0x3d
0xffffffff81195e8d is in rcu_torture_fwd_cb_cr (kernel/rcu/rcutorture.c:2211).
2206		int i;
2207		struct rcu_fwd_cb *rfcp = container_of(rhp, struct rcu_fwd_cb, rh);
2208		struct rcu_fwd_cb **rfcpp;
2209		struct rcu_fwd *rfp = rfcp->rfc_rfp;
2210
2211		rfcp->rfc_next = NULL;
2212		rfcp->rfc_gps++;
2213		spin_lock_irqsave(&rfp->rcu_fwd_lock, flags);
2214		rfcpp = rfp->rcu_fwd_cb_tail;
2215		rfp->rcu_fwd_cb_tail = &rfcp->rfc_next;