On Thu, 14 Apr 2022 11:18:03 +0200, Kurt Kanzenbach <kurt@xxxxxxxxxxxxx> wrote:

I finally ran this series through my tests, and it has some issues.

> Introduce a fast/NMI-safe accessor to clock tai for tracing. The Linux kernel
> tracing infrastructure has support for using different clocks to generate
> timestamps for trace events. Especially in TSN networks it's useful to have
> TAI as the trace clock, because the application scheduling is done in
> accordance with the network time, which is based on TAI. With a tai
> trace_clock in place, it becomes very convenient to correlate network
> activity with Linux kernel application traces.
>
> Use the same implementation as ktime_get_boot_fast_ns() does by reading the
> monotonic time and adding the TAI offset. The same limitations as for the
> fast boot implementation apply. The TAI offset may change at run time, e.g.
> by setting the time or using adjtimex() with an offset. However, these kinds
> of offset changes are rare events. Nevertheless, the user has to be aware of
> them and deal with them in post processing.
>
> An alternative approach would be to use the same implementation as
> ktime_get_real_fast_ns() does. However, this would require adding an
> additional u64 member to the tk_read_base struct. This struct, together with
> a seqcount, is designed to fit into a single cache line on 64-bit
> architectures. Adding a new member would violate this constraint.
>
> Signed-off-by: Kurt Kanzenbach <kurt@xxxxxxxxxxxxx>
> ---
>  Documentation/core-api/timekeeping.rst |  1 +
>  include/linux/timekeeping.h            |  1 +
>  kernel/time/timekeeping.c              | 17 +++++++++++++++++
>  3 files changed, 19 insertions(+)
>
> diff --git a/Documentation/core-api/timekeeping.rst b/Documentation/core-api/timekeeping.rst
> index 729e24864fe7..22ec68f24421 100644
> --- a/Documentation/core-api/timekeeping.rst
> +++ b/Documentation/core-api/timekeeping.rst
> @@ -132,6 +132,7 @@ Some additional variants exist for more specialized cases:
>  .. c:function:: u64 ktime_get_mono_fast_ns( void )
>  		u64 ktime_get_raw_fast_ns( void )
>  		u64 ktime_get_boot_fast_ns( void )
> +		u64 ktime_get_tai_fast_ns( void )
>  		u64 ktime_get_real_fast_ns( void )
>
>  These variants are safe to call from any context, including from
> diff --git a/include/linux/timekeeping.h b/include/linux/timekeeping.h
> index 78a98bdff76d..fe1e467ba046 100644
> --- a/include/linux/timekeeping.h
> +++ b/include/linux/timekeeping.h
> @@ -177,6 +177,7 @@ static inline u64 ktime_get_raw_ns(void)
>  extern u64 ktime_get_mono_fast_ns(void);
>  extern u64 ktime_get_raw_fast_ns(void);
>  extern u64 ktime_get_boot_fast_ns(void);
> +extern u64 ktime_get_tai_fast_ns(void);
>  extern u64 ktime_get_real_fast_ns(void);
>
>  /*
> diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
> index dcdcb85121e4..2c22023fbf5f 100644
> --- a/kernel/time/timekeeping.c
> +++ b/kernel/time/timekeeping.c
> @@ -532,6 +532,23 @@ u64 notrace ktime_get_boot_fast_ns(void)
>  }
>  EXPORT_SYMBOL_GPL(ktime_get_boot_fast_ns);
>
> +/**
> + * ktime_get_tai_fast_ns - NMI safe and fast access to tai clock.
> + *
> + * The same limitations as described for ktime_get_boot_fast_ns() apply. The
> + * mono time and the TAI offset are not read atomically, which may yield wrong
> + * readouts. However, an update of the TAI offset is a rare event, e.g. caused
> + * by settime or adjtimex with an offset. The user of this function has to
> + * deal with the possibility of wrong timestamps in post processing.
> + */
> +u64 notrace ktime_get_tai_fast_ns(void)
> +{
> +	struct timekeeper *tk = &tk_core.timekeeper;
> +
> +	return (ktime_get_mono_fast_ns() + ktime_to_ns(data_race(tk->offs_tai)));

As you are using this for tracing, can you open code the
ktime_get_mono_fast_ns()? Otherwise we need to mark that function as notrace.
Not to mention, this is a fast path, and open coding it via the inlined
__ktime_get_fast_ns() should be less overhead (see the sketch at the end of
this mail).

That said, I hit this too:

            less-5071 [000] d.h2.  498087876.351330: do_raw_spin_trylock <-_raw_spin_lock
            less-5071 [000] d.h4.  498087876.351334: ktime_get_mono_fast_ns <-ktime_get_tai_fast_ns
            less-5071 [000] d.h5.  498087876.351334: ktime_get_mono_fast_ns <-ktime_get_tai_fast_ns
            less-5071 [000] d.h3.  498087876.351334: rcu_read_lock_sched_held <-lock_acquired
            less-5071 [000] d.h5.  498087876.351337: ktime_get_mono_fast_ns <-ktime_get_tai_fast_ns
    kworker/u8:1-45   [003] d.h7. 1651009380.982749: ktime_get_mono_fast_ns <-ktime_get_tai_fast_ns
    kworker/u8:1-45   [003] d.h7. 1651009380.982749: ktime_get_mono_fast_ns <-ktime_get_tai_fast_ns
    kworker/u8:1-45   [003] d.h5. 1651009380.982749: rcu_read_lock_held_common <-rcu_read_lock_sched_held
    kworker/u8:1-45   [003] d.h7.  498087876.375905: ktime_get_mono_fast_ns <-ktime_get_tai_fast_ns
    kworker/u8:1-45   [003] d.h7.  498087876.375905: ktime_get_mono_fast_ns <-ktime_get_tai_fast_ns
    kworker/u8:1-45   [003] d.h5.  498087876.375905: update_cfs_group <-task_tick_fair
    kworker/u8:1-45   [003] d.h7.  498087876.375909: ktime_get_mono_fast_ns <-ktime_get_tai_fast_ns

The clock seems to be toggling between 1651009380 and 498087876, causing the
ftrace ring buffer to shut down (it doesn't allow time to go backwards).

This is running on a 32-bit x86.

-- Steve

> +}
> +EXPORT_SYMBOL_GPL(ktime_get_tai_fast_ns);
> +
>  static __always_inline u64 __ktime_get_real_fast(struct tk_fast *tkf, u64 *mono)
>  {
>  	struct tk_read_base *tkr;
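For illustration, a minimal sketch of the open-coded variant suggested above
could look like the following. It is untested and assumes it lives in
kernel/time/timekeeping.c, where the file-local tk_core and tk_fast_mono
instances and the __always_inline __ktime_get_fast_ns() helper are visible:

u64 notrace ktime_get_tai_fast_ns(void)
{
	struct timekeeper *tk = &tk_core.timekeeper;

	/*
	 * Read the fast monotonic clock through the always-inlined helper
	 * instead of the out-of-line, traceable ktime_get_mono_fast_ns(),
	 * then add the TAI offset exactly as the patch does. The non-atomic
	 * (data_race) read of offs_tai is unchanged.
	 */
	return (__ktime_get_fast_ns(&tk_fast_mono) +
		ktime_to_ns(data_race(tk->offs_tai)));
}

This keeps the mono readout out of the function-tracing path; the alternative
raised above is to simply mark ktime_get_mono_fast_ns() as notrace.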