[+Marc for the arch timer]

On Fri, Feb 08, 2019 at 08:30:25PM +0100, Thomas Gleixner wrote:
> On Fri, 8 Feb 2019, Thomas Gleixner wrote:
> > On Fri, 8 Feb 2019, Will Deacon wrote:
> > > On Fri, Dec 07, 2018 at 05:53:21PM +0000, Will Deacon wrote:
> > > > Anyway, moving the counter read into the protected region is a little fiddly
> > > > because the memory barriers we have in there won't give us the ordering we
> > > > need. We'll instead need to do something nasty, like create a dependency
> > > > from the counter read to the read of the seqlock:
> > > >
> > > > Maybe the untested crufty hack below, although this will be a nightmare to
> > > > implement in C.
>
> How is the in kernel ktime_get() correctness guaranteed then?

Luck.

I think we'll have to introduce a dummy dependent stack read into our
counter accessor so that it's ordered by the smp_rmb(). Example diff
below, which I'll roll into a proper patch series later on.

Will

--->8

diff --git a/arch/arm64/include/asm/arch_timer.h b/arch/arm64/include/asm/arch_timer.h
index f2a234d6516c..bd55b4373700 100644
--- a/arch/arm64/include/asm/arch_timer.h
+++ b/arch/arm64/include/asm/arch_timer.h
@@ -150,8 +150,24 @@ static inline void arch_timer_set_cntkctl(u32 cntkctl)
 
 static inline u64 arch_counter_get_cntpct(void)
 {
+	u64 cnt, tmp;
+
 	isb();
-	return arch_timer_reg_read_stable(cntpct_el0);
+	cnt = arch_timer_reg_read_stable(cntpct_el0);
+
+	/*
+	 * This insanity brought to you by speculative, out-of-order system
+	 * register reads, sequence locks and Thomas Gleixner.
+	 *
+	 * http://lists.infradead.org/pipermail/linux-arm-kernel/2019-February/631195.html
+	 */
+	asm volatile("eor	%0, %1, %1\n"
+		     "add	%0, sp, %0\n"
+		     "ldr	xzr, [%0]"
+		     : "=r" (tmp)
+		     : "r" (cnt)
+		     : "memory");
+
+	return cnt;
 }
 
 static inline u64 arch_counter_get_cntvct(void)