The mainline implementation of read_seqbegin() orders prior loads
w.r.t. the read-side critical section.  Fix up the RT writer-boosting
implementation to provide the same guarantee.

Also, while we're here, update the usage of ACCESS_ONCE() to use
READ_ONCE().

Fixes: e69f15cf77c23 ("seqlock: Prevent rt starvation")
Cc: stable-rt@xxxxxxxxxxxxxxx
Signed-off-by: Julia Cartwright <julia@xxxxxx>
---
Found during code inspection of the RT seqlock implementation.

   Julia

 include/linux/seqlock.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
index a59751276b94..597ce5a9e013 100644
--- a/include/linux/seqlock.h
+++ b/include/linux/seqlock.h
@@ -453,7 +453,7 @@ static inline unsigned read_seqbegin(seqlock_t *sl)
 	unsigned ret;
 
 repeat:
-	ret = ACCESS_ONCE(sl->seqcount.sequence);
+	ret = READ_ONCE(sl->seqcount.sequence);
 	if (unlikely(ret & 1)) {
 		/*
 		 * Take the lock and let the writer proceed (i.e. evtl
@@ -462,6 +462,7 @@ static inline unsigned read_seqbegin(seqlock_t *sl)
 		spin_unlock_wait(&sl->lock);
 		goto repeat;
 	}
+	smp_rmb();
 	return ret;
 }
 #endif
-- 
2.16.1
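
For context, a minimal sketch of the reader/writer pairing that relies
on the ordering guarantee this patch restores.  This is illustrative
only, not part of the patch; sample_lock, sample_data, read_sample()
and write_sample() are made-up names:

	#include <linux/seqlock.h>
	#include <linux/types.h>

	static DEFINE_SEQLOCK(sample_lock);
	static u64 sample_data;

	static u64 read_sample(void)
	{
		unsigned seq;
		u64 val;

		do {
			/*
			 * read_seqbegin() must order the sequence count
			 * load before the data load below.  Without the
			 * smp_rmb(), the CPU could satisfy the data load
			 * first, pair stale data with a pre-update
			 * sequence value, and read_seqretry() would never
			 * see the concurrent write and so never retry.
			 */
			seq = read_seqbegin(&sample_lock);
			val = sample_data;
		} while (read_seqretry(&sample_lock, seq));

		return val;
	}

	static void write_sample(u64 val)
	{
		write_seqlock(&sample_lock);
		sample_data = val;
		write_sequnlock(&sample_lock);
	}

The same reasoning applies to the RT variant: the added smp_rmb() gives
readers the acquire-like semantics that mainline read_seqbegin() already
provides via raw_read_seqcount_begin().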