Re: [PATCH bpf-next v7 2/4] bpf: add bpf_cpu_cycles_to_ns helper

On 20/11/2024 00:49, Peter Zijlstra wrote:
> On Tue, Nov 19, 2024 at 06:38:51AM -0800, Vadim Fedorenko wrote:
>> On 19/11/2024 03:28, Peter Zijlstra wrote:
>>> On Mon, Nov 18, 2024 at 10:52:43AM -0800, Vadim Fedorenko wrote:
>>>
>>>> +			if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL &&
>>>> +			    imm32 == BPF_CALL_IMM(bpf_cpu_cycles_to_ns) &&
>>>> +			    cpu_feature_enabled(X86_FEATURE_CONSTANT_TSC)) {
>>>> +				u32 mult, shift;
>>>> +
>>>> +				clocks_calc_mult_shift(&mult, &shift, tsc_khz, USEC_PER_SEC, 0);
>>>> +				/* imul RAX, RDI, mult */
>>>> +				maybe_emit_mod(&prog, BPF_REG_1, BPF_REG_0, true);
>>>> +				EMIT2_off32(0x69, add_2reg(0xC0, BPF_REG_1, BPF_REG_0),
>>>> +					    mult);
>>>> +
>>>> +				/* shr RAX, shift (which is less than 64) */
>>>> +				maybe_emit_1mod(&prog, BPF_REG_0, true);
>>>> +				EMIT3(0xC1, add_1reg(0xE8, BPF_REG_0), shift);
>>>> +
>>>> +				break;
>>>> +			}

>>> This is ludicrously horrible. Why are you using your own mult/shift and
>>> not offset here instead of using the one from either sched_clock or
>>> clocksource_tsc?

>> With X86_FEATURE_CONSTANT_TSC, tsc_khz is actually constant after
>> switching from tsc_early. And the very same call to
>> clocks_calc_mult_shift() is used to create the clocksource_tsc mult and
>> shift constants. Unfortunately, clocksources don't have a proper API to
>> expose the underlying info, which is why I have to calculate the shift
>> and mult values on my own.

> There is cyc2ns_read_begin() / cyc2ns_read_end(), and you can use the
> VDSO thing you do below.

Looks like I missed the arch-specific implementation. Thanks, I'll use it
in the next version.

>>> And being totally inconsistent with your own alternative implementation
>>> which uses the VDSO, which in turn uses clocksource_tsc:

>> With what I said above, it is consistent with clocksource_tsc.


>>>> +__bpf_kfunc u64 bpf_cpu_cycles_to_ns(u64 cycles)
>>>> +{
>>>> +	const struct vdso_data *vd = __arch_get_k_vdso_data();
>>>> +
>>>> +	vd = &vd[CS_RAW];
>>>> +	/* The kfunc implementation does fewer manipulations than the vDSO
>>>> +	 * implementation. The BPF use-case assumes two measurements are
>>>> +	 * close in time and can simplify the logic.
>>>> +	 */
>>>> +	return mul_u64_u32_shr(cycles, vd->mult, vd->shift);
>>>> +}

>>> Also, if I'm not mistaken, the above is broken; you really should add
>>> the offset, without it I don't think we guarantee the result is
>>> monotonic.

>> Not quite sure how a constant offset can affect the monotonicity
>> guarantee, given that the main use case will be calculating ns from
>> small deltas of cycles?

> Well, when I read this patch I didn't know, because your changelogs
> don't mention anything at all.

Fair, I'll improve the commit message in v8, thanks.



