On Wed, Jan 3, 2024 at 11:20 AM Andrii Nakryiko <andrii.nakryiko@xxxxxxxxx> wrote:
> > +
> > +/* C type conversions coupled with comparison operator are tricky.
> > + * Make sure BPF program is compiled with -Wsign-compre then
>
> fixed typo while applying: compare

ohh. I cannot even copy-paste properly.

> > + * __lhs OP __rhs below will catch the mistake.
> > + * Be aware that we check only __lhs to figure out the sign of compare.
> > + */
> > +#define _bpf_cmp(LHS, OP, RHS, NOFLIP) \
>
> nit: NOFLIP name is absolutely confusing, tbh, it would be nice to
> rename it to something more easily understood (DEFAULT is IMO better)

I actually had it as DEFAULT, but it was less clear.
NOFLIP means whether the condition should be flipped or not.
DEFAULT is too ambiguous.

> > +	({ \
> > +		typeof(LHS) __lhs = (LHS); \
> > +		typeof(RHS) __rhs = (RHS); \
> > +		bool ret; \
> > +		_Static_assert(sizeof(&(LHS)), "1st argument must be an lvalue expression"); \
> > +		(void)(__lhs OP __rhs); \
>
> what is this part for? Is this what "__lhs OP __rhs below will catch
> the mistake" in the comment refers to?

Yes. This one will give proper error either on GCC -Wall
or with clang -Wall -Wsign-compare.

> BTW, I think we hit this one when invalid OP is specified, before we
> get to (void)"bug" (see below). So maybe it would be better to push it
> deeper into __bpf_cmp itself?
>
> > +		if (__cmp_cannot_be_signed(OP) || !__is_signed_type(typeof(__lhs))) {\
> > +			if (sizeof(__rhs) == 8) \
> > +				ret = __bpf_cmp(__lhs, OP, "", "r", __rhs, NOFLIP); \
> > +			else \
> > +				ret = __bpf_cmp(__lhs, OP, "", "i", __rhs, NOFLIP); \
>
> tbh, I'm also not 100% sure why this sizeof() == 8 and corresponding r
> vs i change? Can you please elaborate (and add a comment in a follow
> up, perhaps?)

That was in the commit log:
"
Note that the macros has to be careful with RHS assembly predicate.
Since:
u64 __rhs = 1ull << 42;
asm goto("if r0 < %[rhs] goto +1" :: [rhs] "ri" (__rhs));
LLVM will silently truncate 64-bit constant into s32 imm.
"
I will add a comment to the code as well.

> > +		} else { \
> > +			if (sizeof(__rhs) == 8) \
> > +				ret = __bpf_cmp(__lhs, OP, "s", "r", __rhs, NOFLIP); \
> > +			else \
> > +				ret = __bpf_cmp(__lhs, OP, "s", "i", __rhs, NOFLIP); \
>
> one simplification that would reduce number of arguments to __bpf_cmp:
>
> in __bpf_cmp (drop # before OP):
>
> asm volatile goto("if %[lhs] " OP " %[rhs] goto %l[l_true]"
>
> And here you just stringify operator, appending "s" if necessary:
>
> ret = __bpf_cmp(__lhs, #OP, "i", __rhs, NOFLIP);
>
> or
>
> ret = __bpf_cmp(__lhs, "s"#OP, "r", __rhs, NOFLIP);
>
> Consider for a follow up, if you agree it is a simplification.

Just to reduce the number of arguments? I will give it a try.

> I actually like a more verbose but also more explicit way of likely
> implementation, listing explicitly supported operators. It will also
> be easier for users to check what they are supposed to pass. So that's
> another thing to maybe do in the follow up?

I thought about making unlikely explicit, but it looked weird and noisy:

if (__builtin_strcmp(#OP, "==") == 0 || __builtin_strcmp(#OP, "!=") == 0 ||
    __builtin_strcmp(#OP, "<") == 0 || ...

and all the noise just to make unlikely match likely.
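Roughly, just to show the shape (untested sketch; the (void)"bug" fallback
and the final _bpf_cmp(..., true) call are only placeholders here), an
explicit unlikely would end up as something like:

#define bpf_cmp_unlikely(LHS, OP, RHS) \
	({ \
		bool ret = 0; \
		if (__builtin_strcmp(#OP, "==") == 0 || \
		    __builtin_strcmp(#OP, "!=") == 0 || \
		    __builtin_strcmp(#OP, "<") == 0 || \
		    __builtin_strcmp(#OP, "<=") == 0 || \
		    __builtin_strcmp(#OP, ">") == 0 || \
		    __builtin_strcmp(#OP, ">=") == 0) \
			/* supported operator, pass it through as-is */ \
			ret = _bpf_cmp(LHS, OP, RHS, true); \
		else \
			/* anything else is a bug in the caller */ \
			(void)"bug"; \
		ret; \
	})

That is six strcmp checks that end up doing exactly what the one-liner
does today.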
I felt it's not worth it, and the error in the current form, as you noticed:

> progs/iters_task_vma.c:31:7: error: invalid operand for instruction
>    31 |        if (bpf_cmp_unlikely(seen, <<, 1000))
> <inline asm>:1:8: note: instantiated into assembly here
>     1 |        if r6 << 1000 goto LBB0_6

is much cleaner than a (void) return.

I've tried many different ways to have a helper macro
#define flip_cmp_opcode(OP)
and use it inside bpf_cmp, but couldn't make it work.
This is the shortest version of the macros I came up with.
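And to illustrate the sizeof(__rhs) == 8 point above with actual usage,
a hypothetical program (made-up section, function and variable names;
only the 1ull << 42 constant comes from the commit log):

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include "bpf_experimental.h"

SEC("tc")
int big_const_cmp(struct __sk_buff *skb)
{
	u64 len = skb->len;

	/* RHS is 8 bytes, so _bpf_cmp picks the "r" predicate and the
	 * constant goes through a register instead of being silently
	 * truncated to a s32 imm by LLVM.
	 */
	if (bpf_cmp_unlikely(len, >=, 1ull << 42))
		return 1;
	return 0;
}

char _license[] SEC("license") = "GPL";

With an int-sized constant on the RHS the same macro falls back to the
"i" predicate and emits the immediate form of the jump directly.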