Re: [PATCH bpf-next v7 2/4] bpf: add bpf_cpu_cycles_to_ns helper

On Mon, Nov 18, 2024 at 10:52:43AM -0800, Vadim Fedorenko wrote:

> +			if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL &&
> +			    imm32 == BPF_CALL_IMM(bpf_cpu_cycles_to_ns) &&
> +			    cpu_feature_enabled(X86_FEATURE_CONSTANT_TSC)) {
> +				u32 mult, shift;
> +
> +				clocks_calc_mult_shift(&mult, &shift, tsc_khz, USEC_PER_SEC, 0);
> +				/* imul RAX, RDI, mult */
> +				maybe_emit_mod(&prog, BPF_REG_1, BPF_REG_0, true);
> +				EMIT2_off32(0x69, add_2reg(0xC0, BPF_REG_1, BPF_REG_0),
> +					    mult);
> +
> +				/* shr RAX, shift (which is less than 64) */
> +				maybe_emit_1mod(&prog, BPF_REG_0, true);
> +				EMIT3(0xC1, add_1reg(0xE8, BPF_REG_0), shift);
> +
> +				break;
> +			}

This is ludicrously horrible. Why are you computing your own mult/shift
here, with no offset, instead of using the ones from either sched_clock
or clocksource_tsc?

And it is totally inconsistent with your own alternative implementation,
which uses the vDSO, which in turn uses clocksource_tsc:

> +__bpf_kfunc u64 bpf_cpu_cycles_to_ns(u64 cycles)
> +{
> +	const struct vdso_data *vd = __arch_get_k_vdso_data();
> +
> +	vd = &vd[CS_RAW];
> +	/* kfunc implementation does less manipulations than vDSO
> +	 * implementation. BPF use-case assumes two measurements are close
> +	 * in time and can simplify the logic.
> +	 */
> +	return mul_u64_u32_shr(cycles, vd->mult, vd->shift);
> +}

Also, if I'm not mistaken, the above is broken: you really should add
the offset, because without it I don't think the result is guaranteed
to be monotonic.
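For illustration, a rough sketch of how the generic vDSO hres path turns
a cycle reading into CLOCK_MONOTONIC_RAW nanoseconds: a delta from
cycle_last, converted with the clocksource mult/shift, added to the base
nanoseconds captured at the last timekeeping update (the "offset"). The
seqcount retry loop and the seconds part of the real code are omitted:

#include <vdso/datapage.h>
#include <asm/vdso/vsyscall.h>

/* Sketch only: the base taken at the last update is what keeps the
 * result monotonic across mult/shift adjustments.
 */
static u64 cycles_to_mono_raw_ns_sketch(u64 cycles)
{
	const struct vdso_data *vd = &__arch_get_k_vdso_data()[CS_RAW];
	const struct vdso_timestamp *ts = &vd->basetime[CLOCK_MONOTONIC_RAW];
	u64 ns = ts->nsec;	/* stored pre-shifted by vd->shift */

	ns += ((cycles - vd->cycle_last) & vd->mask) * vd->mult;

	return ns >> vd->shift;
}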





