Re: [PATCH bpf-next v5 1/4] bpf: add bpf_get_cpu_cycles kfunc

On Tue, 2024-11-12 at 21:39 +0000, Vadim Fedorenko wrote:

[...]

> > > +			if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL &&
> > > +			    imm32 == BPF_CALL_IMM(bpf_get_cpu_cycles)) {
> > > +				/* Save RDX because RDTSC will use EDX:EAX to return u64 */
> > > +				emit_mov_reg(&prog, true, AUX_REG, BPF_REG_3);
> > > +				if (boot_cpu_has(X86_FEATURE_LFENCE_RDTSC))
> > > +					EMIT_LFENCE();
> > > +				EMIT2(0x0F, 0x31);
> > > +
> > > +				/* shl RDX, 32 */
> > > +				maybe_emit_1mod(&prog, BPF_REG_3, true);
> > > +				EMIT3(0xC1, add_1reg(0xE0, BPF_REG_3), 32);
> > > +				/* or RAX, RDX */
> > > +				maybe_emit_mod(&prog, BPF_REG_0, BPF_REG_3, true);
> > > +				EMIT2(0x09, add_2reg(0xC0, BPF_REG_0, BPF_REG_3));
> > > +				/* restore RDX from R11 */
> > > +				emit_mov_reg(&prog, true, BPF_REG_3, AUX_REG);
> > 
> > Note: The default implementation of this kfunc uses __arch_get_hw_counter(),
> >        which is implemented as `(u64)rdtsc_ordered() & S64_MAX`.
> >        Here we don't do `& S64_MAX`.
> >        The masking in __arch_get_hw_counter() was added by this commit:
> >        77750f78b0b3 ("x86/vdso: Fix gettimeofday masking").
> 
> I think we already discussed this with Alexey in v1: we don't really need
> any masking here for the BPF case. We can use the values provided by the
> CPU directly. Within one BPF program we will never have both the inlined
> and the non-inlined implementation of this helper, so the values being
> compared will always come from the same source.
> 
> >        Also, the default implementation does not issue `lfence`.
> >        Not sure if this makes any real-world difference.
> 
> Well, it actually does. rdtsc_ordered() is translated into either
> `lfence; rdtsc` or `rdtscp` (which is rdtsc plus ordering plus a u32
> cookie in ECX), depending on the CPU features.

I see the following disassembly:

0000000000008980 <bpf_get_cpu_cycles>:
; {
    8980: f3 0f 1e fa                   endbr64
    8984: e8 00 00 00 00                callq   0x8989 <bpf_get_cpu_cycles+0x9>
                0000000000008985:  R_X86_64_PLT32       __fentry__-0x4
;       asm volatile(ALTERNATIVE_2("rdtsc",
    8989: 0f 31                         rdtsc
    898b: 90                            nop
    898c: 90                            nop
    898d: 90                            nop
;       return EAX_EDX_VAL(val, low, high);
    898e: 48 c1 e2 20                   shlq    $0x20, %rdx
    8992: 48 09 d0                      orq     %rdx, %rax
    8995: 48 b9 ff ff ff ff ff ff ff 7f movabsq $0x7fffffffffffffff, %rcx # imm = 0x7FFFFFFFFFFFFFFF
;               return (u64)rdtsc_ordered() & S64_MAX;
    899f: 48 21 c8                      andq    %rcx, %rax
;       return __arch_get_hw_counter(1, NULL);
    89a2: 2e e9 00 00 00 00             jmp     0x89a8 <bpf_get_cpu_cycles+0x28>

Is this patched when the kernel boots, replacing the nops with lfence?
By real-world difference I meant the difference between the default
implementation and the inlined assembly.

[...]

> > > @@ -20488,6 +20510,12 @@ static int fixup_kfunc_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
> > >   						node_offset_reg, insn, insn_buf, cnt);
> > >   	} else if (desc->func_id == special_kfunc_list[KF_bpf_cast_to_kern_ctx] ||
> > >   		   desc->func_id == special_kfunc_list[KF_bpf_rdonly_cast]) {
> > > +		if (!verifier_inlines_kfunc_call(env, imm)) {
> > > +			verbose(env, "verifier internal error: kfunc id %d is not defined in checker\n",
> > > +				desc->func_id);
> > > +			return -EFAULT;
> > > +		}
> > > +
> > 
> > Nit: I still think that moving this check to be the first conditional
> >       would have been better:
> > 
> >       if (verifier_inlines_kfunc_call(env, imm)) {
> >          if (desc->func_id == special_kfunc_list[KF_bpf_cast_to_kern_ctx] ||
> >             desc->func_id == special_kfunc_list[KF_bpf_rdonly_cast]) {
> >             // ...
> >          } else {
> >             // report error
> >          }
> >       } else if (...) {
> >         // ... rest of the cases
> >       }
> 
> I can change it in the next iteration (if one is needed), if you insist.

No need to change it if there won't be a next iteration.

[...]
