On Mon, Nov 4, 2024 at 12:05 PM Mathias Krause <minipli@xxxxxxxxxxxxxx> wrote:
>
> The "staggered jumps" tests currently fail with constant blinding
> enabled, as the increased program size makes jump offsets overflow.
>
> Fix that by decreasing the number of jumps depending on the expected
> size increase caused by blinding the program.
>
> As the test for JIT blinding makes use of bpf_jit_blinding_enabled(NULL)
> and test_bpf.ko is a kernel module, 'bpf_token_capable' and
> 'bpf_jit_harden' need to be exported.
>
> Fixes: a7d2e752e520 ("bpf/tests: Add staggered JMP and JMP32 tests")
> Cc: Johan Almbladh <johan.almbladh@xxxxxxxxxxxxxxxxx>
> Signed-off-by: Mathias Krause <minipli@xxxxxxxxxxxxxx>
> ---
>  kernel/bpf/core.c  |  3 +++
>  kernel/bpf/token.c |  3 +++
>  lib/test_bpf.c     | 19 +++++++++++++++++--
>  3 files changed, 23 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
> index 233ea78f8f1b..fe7eada54d4b 100644
> --- a/kernel/bpf/core.c
> +++ b/kernel/bpf/core.c
> @@ -570,6 +570,9 @@ int bpf_jit_kallsyms __read_mostly = IS_BUILTIN(CONFIG_BPF_JIT_DEFAULT_ON);
>  int bpf_jit_harden   __read_mostly;
>  long bpf_jit_limit   __read_mostly;
>  long bpf_jit_limit_max __read_mostly;
> +#if IS_MODULE(CONFIG_TEST_BPF)
> +EXPORT_SYMBOL_GPL(bpf_jit_harden);
> +#endif
>
>  static void
>  bpf_prog_ksym_set_addr(struct bpf_prog *prog)
> diff --git a/kernel/bpf/token.c b/kernel/bpf/token.c
> index dcbec1a0dfb3..aed98a958c73 100644
> --- a/kernel/bpf/token.c
> +++ b/kernel/bpf/token.c
> @@ -26,6 +26,9 @@ bool bpf_token_capable(const struct bpf_token *token, int cap)
>  		return false;
>  	return true;
>  }
> +#if IS_MODULE(CONFIG_TEST_BPF)
> +EXPORT_SYMBOL_GPL(bpf_token_capable);
> +#endif

I don't like the extra exports and the hack in patch 2 just to test JIT
blinding. In general, lib/test_bpf is there for testing JIT paths that
are impossible to exercise with normal asm/C from selftests. I don't
think JIT blinding falls into this category.
The current code coverage is good enough.

pw-bot: cr

> void bpf_token_inc(struct bpf_token *token)
> {
> diff --git a/lib/test_bpf.c b/lib/test_bpf.c
> index c1140bab280d..3469631c0aba 100644
> --- a/lib/test_bpf.c
> +++ b/lib/test_bpf.c
> @@ -2700,10 +2700,25 @@ static int __bpf_fill_staggered_jumps(struct bpf_test *self,
>  				      u64 r1, u64 r2)
>  {
>  	int size = self->test[0].result - 1;
> -	int len = 4 + 3 * (size + 1);
>  	struct bpf_insn *insns;
> -	int off, ind;
> +	int len, off, ind;
>
> +	/* Constant blinding triples the size of each instruction making use
> +	 * of immediate values. Tweak the test to not overflow jump offsets.
> +	 */
> +	if (bpf_jit_blinding_enabled(NULL)) {
> +		int bloat_factor = 2 * 3;
> +
> +		if (BPF_SRC(jmp->code) == BPF_K)
> +			bloat_factor += 3;
> +
> +		size /= bloat_factor;
> +		size &= ~1;
> +
> +		self->test[0].result = size + 1;
> +	}
> +
> +	len = 4 + 3 * (size + 1);
>  	insns = kmalloc_array(len, sizeof(*insns), GFP_KERNEL);
>  	if (!insns)
>  		return -ENOMEM;
> --
> 2.30.2
>