On Wed, Jan 3, 2024 at 10:04 PM Alexei Starovoitov
<alexei.starovoitov@xxxxxxxxx> wrote:
>
> On Wed, Jan 3, 2024 at 11:20 AM Andrii Nakryiko
> <andrii.nakryiko@xxxxxxxxx> wrote:
> > > +
> > > +/* C type conversions coupled with comparison operator are tricky.
> > > + * Make sure BPF program is compiled with -Wsign-compre then
> >
> > fixed typo while applying: compare
>
> ohh. I cannot even copy-paste properly.
>
> > > + * __lhs OP __rhs below will catch the mistake.
> > > + * Be aware that we check only __lhs to figure out the sign of compare.
> > > + */
> > > +#define _bpf_cmp(LHS, OP, RHS, NOFLIP) \
> >
> > nit: NOFLIP name is absolutely confusing, tbh, it would be nice to
> > rename it to something more easily understood (DEFAULT is IMO better)
>
> I actually had it as DEFAULT, but it was less clear.
> NOFLIP means whether the condition should be flipped or not.
> DEFAULT is too ambiguous.

LIKELY (or UNLIKELY, whichever makes sense) might be another
suggestion. But it's minor, so feel free to ignore.

> > > +	({ \
> > > +		typeof(LHS) __lhs = (LHS); \
> > > +		typeof(RHS) __rhs = (RHS); \
> > > +		bool ret; \
> > > +		_Static_assert(sizeof(&(LHS)), "1st argument must be an lvalue expression"); \
> > > +		(void)(__lhs OP __rhs); \
> >
> > what is this part for? Is this what "__lhs OP __rhs below will catch
> > the mistake" in the comment refers to?
>
> Yes. This one will give proper error either on GCC -Wall
> or with clang -Wall -Wsign-compare.
>
> > BTW, I think we hit this one when invalid OP is specified, before we
> > get to (void)"bug" (see below). So maybe it would be better to push it
> > deeper into __bpf_cmp itself?
> >
> > > +		if (__cmp_cannot_be_signed(OP) || !__is_signed_type(typeof(__lhs))) {\
> > > +			if (sizeof(__rhs) == 8) \
> > > +				ret = __bpf_cmp(__lhs, OP, "", "r", __rhs, NOFLIP); \
> > > +			else \
> > > +				ret = __bpf_cmp(__lhs, OP, "", "i", __rhs, NOFLIP); \
> >
> > tbh, I'm also not 100% sure why this sizeof() == 8 and corresponding r
> > vs i change? Can you please elaborate (and add a comment in a follow
> > up, perhaps?)
>
> That was in the commit log:
> "
> Note that the macros has to be careful with RHS assembly predicate.
> Since:
> u64 __rhs = 1ull << 42;
> asm goto("if r0 < %[rhs] goto +1" :: [rhs] "ri" (__rhs));
> LLVM will silently truncate 64-bit constant into s32 imm.
> "
>
> I will add a comment to the code as well.

ah, ok, it didn't click for me, thanks for adding a comment

while on the topic, is there a problem specifying "ri" for
sizeof() < 8 case? At least for alu32 cases, shouldn't we allow
w1 < w2 kind of situations?

> > > +		} else { \
> > > +			if (sizeof(__rhs) == 8) \
> > > +				ret = __bpf_cmp(__lhs, OP, "s", "r", __rhs, NOFLIP); \
> > > +			else \
> > > +				ret = __bpf_cmp(__lhs, OP, "s", "i", __rhs, NOFLIP); \
> >
> > one simplification that would reduce number of arguments to __bpf_cmp:
> >
> > in __bpf_cmp (drop # before OP):
> >
> > asm volatile goto("if %[lhs] " OP " %[rhs] goto %l[l_true]"
> >
> >
> > And here you just stringify operator, appending "s" if necessary:
> >
> > ret = __bpf_cmp(__lhs, #OP, "i", __rhs, NOFLIP);
> >
> > or
> >
> > ret = __bpf_cmp(__lhs, "s"#OP, "r", __rhs, NOFLIP);
> >
> > Consider for a follow up, if you agree it is a simplification.
>
> Just to reduce the number of arguments? I will give it a try.

yeah, pretty much just for that

> > I actually like a more verbose but also more explicit way of likely
> > implementation, listing explicitly supported operators. It will also
> > be easier for users to check what they are supposed to pass. So that's
> > another thing to maybe do in the follow up?
>
> I thought about making unlikely explicit, but it looked weird and noisy:
> if (__builtin_strcmp(#OP, "==") == 0 ||
>     __builtin_strcmp(#OP, "!=") == 0 ||
>     __builtin_strcmp(#OP, "<") == 0 ||
> ...
> and all the noise just to make unlikely match likely.
>
> I felt it's not worth it and the error in the current form as you noticed:
>
> > progs/iters_task_vma.c:31:7: error: invalid operand for instruction
> >    31 |         if (bpf_cmp_unlikely(seen, <<, 1000))
> > <inline asm>:1:8: note: instantiated into assembly here
> >     1 |         if r6 << 1000 goto LBB0_6
>
> is much cleaner instead of (void) return.

Right, I only played with unlikely last time. Trying the same with
likely right now, it's not that great:

In file included from progs/profiler2.c:6:
progs/profiler.inc.h:225:7: error: variable 'ret' is uninitialized when used here [-Werror,-Wuninitialized]
  225 |         if (bpf_cmp_likely(filepart_length, <==, MAX_PATH)) {
      |             ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/data/users/andriin/linux/tools/testing/selftests/bpf/bpf_experimental.h:322:3: note: expanded from macro 'bpf_cmp_likely'
  322 |         ret; \
      |         ^~~
progs/profiler.inc.h:225:7: note: variable 'ret' is declared here
/data/users/andriin/linux/tools/testing/selftests/bpf/bpf_experimental.h:307:3: note: expanded from macro 'bpf_cmp_likely'
  307 |         bool ret; \
      |         ^

I tried adding _Static_assert, and it is triggered for all valid cases
as well, so it seems like the compiler doesn't detect this last branch
as dead code, unfortunately.

> I've tried many different ways to have a helper macro
> #define flip_cmp_opcode(OP)
> and use it inside the bpf_cmp, but couldn't make it work.
> This is the shortest version of macros I came up with.