On Tue, 2023-10-31 at 19:45 +0000, David Woodhouse wrote:
> On Mon, 2023-10-30 at 15:50 +0000, David Woodhouse wrote:
> > 
> > +static int do_monotonic(s64 *t, u64 *tsc_timestamp)
> > +{
> > +	struct pvclock_gtod_data *gtod = &pvclock_gtod_data;
> > +	unsigned long seq;
> > +	int mode;
> > +	u64 ns;
> > +
> > +	do {
> > +		seq = read_seqcount_begin(&gtod->seq);
> > +		ns = gtod->clock.base_cycles;
> > +		ns += vgettsc(&gtod->clock, tsc_timestamp, &mode);
> > +		ns >>= gtod->clock.shift;
> > +		ns += ktime_to_ns(ktime_add(gtod->clock.offset, gtod->offs_boot));
> > +	} while (unlikely(read_seqcount_retry(&gtod->seq, seq)));
> > +	*t = ns;
> > +
> > +	return mode;
> > +}
> > +
> 
> Hrm, that's basically cargo-culted from do_monotonic_raw() immediately
> above it. Should it be adding gtod->offs_boot?
> 
> Empirically the answer would appear to be 'no'. When gtod->offs_boot is
> non-zero, I see kvm_get_monotonic_and_clockread() returning values
> which are precisely that far in advance of what ktime_get() reports.

.... because the do_monotonic_raw() function, despite the simple
clarity of its name... doesn't actually return the CLOCK_MONOTONIC_RAW
time. Of course it doesn't. Why would a function with that name return
the MONOTONIC_RAW clock?

It actually returns the same as get_kvmclock_base_ns(), which is

	/* Count up from boot time, but with the frequency of the raw clock.  */
	return ktime_to_ns(ktime_add(ktime_get_raw(), pvclock_gtod_data.offs_boot));

I feel that Grey's Law is starting to apply to this clock stuff. This
is starting to be indistinguishable from malice ;)
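
For reference, here's a minimal, untested sketch of what do_monotonic()
might look like with the offs_boot addition dropped, on the assumption
that kvm_get_monotonic_and_clockread() is meant to line up with what
ktime_get() (CLOCK_MONOTONIC) reports. Everything else is unchanged
from the quoted patch:

/*
 * Sketch only: identical to the quoted do_monotonic(), but without the
 * gtod->offs_boot term, so the result should match ktime_get().
 */
static int do_monotonic(s64 *t, u64 *tsc_timestamp)
{
	struct pvclock_gtod_data *gtod = &pvclock_gtod_data;
	unsigned long seq;
	int mode;
	u64 ns;

	do {
		seq = read_seqcount_begin(&gtod->seq);
		ns = gtod->clock.base_cycles;
		ns += vgettsc(&gtod->clock, tsc_timestamp, &mode);
		ns >>= gtod->clock.shift;
		/* CLOCK_MONOTONIC: mono offset only, no boot offset */
		ns += ktime_to_ns(gtod->clock.offset);
	} while (unlikely(read_seqcount_retry(&gtod->seq, seq)));
	*t = ns;

	return mode;
}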