Re: [PATCH bpf-next 1/2] bpf: Fix a sdiv overflow issue

On 9/11/24 7:18 AM, Daniel Borkmann wrote:
On 9/11/24 6:40 AM, Yonghong Song wrote:
Zac Ecob reported a problem [1] where a bpf program may cause a kernel crash
due to the following error:
   Oops: divide error: 0000 [#1] PREEMPT SMP KASAN PTI

The failure is due to the below signed divide:
   LLONG_MIN/-1 where LLONG_MIN equals -9,223,372,036,854,775,808.
LLONG_MIN/-1 is supposed to give the positive number 9,223,372,036,854,775,808,
but that is impossible since, for a 64-bit signed integer, the maximum
positive value is 9,223,372,036,854,775,807. On x86_64, LLONG_MIN/-1
causes a kernel exception. On arm64, the result of LLONG_MIN/-1 is
LLONG_MIN.

So for the 64-bit signed divide (sdiv), some additional insns are patched in
to check for the LLONG_MIN/-1 pattern. If the pattern does occur, the result
is LLONG_MIN. Otherwise, the normal sdiv operation proceeds.
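
For reference, the runtime semantics that the patched insns implement
amount to the following userspace C sketch; sdiv64_patched() is an
illustrative name, not a kernel helper:

#include <limits.h>

/* userspace sketch only; mirrors the patched insn sequence */
static long long sdiv64_patched(long long dst, long long src)
{
        if (src == 0)
                return 0;                  /* Rx sdiv 0 -> 0 */
        if (src == -1 && dst == LLONG_MIN)
                return LLONG_MIN;          /* LLONG_MIN sdiv -1 -> LLONG_MIN */
        return dst / src;                  /* no trapping operands remain */
}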

I presume this could be a follow-up but it would also need an update to [0]
to describe the behavior.

  [0] Documentation/bpf/standardization/instruction-set.rst

I will do this as a follow-up. It will cover all cases, including this patch
plus the existing patched insns that handle r1/r2 and r1%r2, where a runtime
check is needed because r2 could be 0.
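
For the doc update, the already-patched 64-bit unsigned div/mod semantics
amount to something like the sketch below (helper names are illustrative
only, not kernel code):

/* sketch of the existing patched 64-bit div/mod semantics */
static unsigned long long div64_patched(unsigned long long dst,
                                        unsigned long long src)
{
        return src == 0 ? 0 : dst / src;   /* Rx div 0 -> 0 */
}

static unsigned long long mod64_patched(unsigned long long dst,
                                        unsigned long long src)
{
        return src == 0 ? dst : dst % src; /* Rx mod 0 -> Rx */
}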


   [1] https://lore.kernel.org/bpf/tPJLTEh7S_DxFEqAI2Ji5MBSoZVg7_G-Py2iaZpAaWtM961fFTWtsnlzwvTbzBzaUzwQAoNATXKUlt0LZOFgnDcIyKCswAnAGdUF3LBrhGQ=@protonmail.com/

Reported-by: Zac Ecob <zacecob@xxxxxxxxxxxxxx>
Signed-off-by: Yonghong Song <yonghong.song@xxxxxxxxx>
---
  kernel/bpf/verifier.c | 29 ++++++++++++++++++++++++++---
  1 file changed, 26 insertions(+), 3 deletions(-)

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index f35b80c16cda..d77f1a05a065 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -20506,6 +20506,7 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
              insn->code == (BPF_ALU | BPF_DIV | BPF_X)) {
              bool is64 = BPF_CLASS(insn->code) == BPF_ALU64;
              bool isdiv = BPF_OP(insn->code) == BPF_DIV;
+            bool is_sdiv64 = is64 && isdiv && insn->off == 1;
              struct bpf_insn *patchlet;
              struct bpf_insn chk_and_div[] = {
                  /* [R,W]x div 0 -> 0 */
@@ -20525,10 +20526,32 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
                  BPF_JMP_IMM(BPF_JA, 0, 0, 1),
                  BPF_MOV32_REG(insn->dst_reg, insn->dst_reg),
              };
+            struct bpf_insn chk_and_sdiv64[] = {
+                /* Rx sdiv 0 -> 0 */
+                BPF_RAW_INSN(BPF_JMP | BPF_JNE | BPF_K, insn->src_reg,
+                         0, 2, 0),
+                BPF_ALU32_REG(BPF_XOR, insn->dst_reg, insn->dst_reg),
+                BPF_JMP_IMM(BPF_JA, 0, 0, 8),
+                /* LLONG_MIN sdiv -1 -> LLONG_MIN */
+                BPF_RAW_INSN(BPF_JMP | BPF_JNE | BPF_K, insn->src_reg,
+                         0, 6, -1),
+                BPF_LD_IMM64(insn->src_reg, LLONG_MIN),
+                BPF_RAW_INSN(BPF_JMP | BPF_JNE | BPF_X, insn->dst_reg,
+                         insn->src_reg, 2, 0),
+                BPF_MOV64_IMM(insn->src_reg, -1),
+                BPF_JMP_IMM(BPF_JA, 0, 0, 2),
+                BPF_MOV64_IMM(insn->src_reg, -1),

Looks good, we could probably shrink this snippet via BPF_REG_AX?
Untested, like below:

+                /* Rx sdiv 0 -> 0 */
+                BPF_RAW_INSN(BPF_JMP | BPF_JNE | BPF_K, insn->src_reg, 0, 2, 0),
+                BPF_ALU32_REG(BPF_XOR, insn->dst_reg, insn->dst_reg),
+                BPF_JMP_IMM(BPF_JA, 0, 0, 5),
+                /* LLONG_MIN sdiv -1 -> LLONG_MIN */
+                BPF_RAW_INSN(BPF_JMP | BPF_JNE | BPF_K, insn->src_reg, 0, 3, -1),
+                BPF_LD_IMM64(BPF_REG_AX, LLONG_MIN),
+                BPF_RAW_INSN(BPF_JMP | BPF_JEQ | BPF_X, insn->dst_reg, BPF_REG_AX, 1, 0),
+                *insn,

Then we don't need to restore the src_reg in both paths.
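
In C terms, the control flow of the AX variant is roughly the sketch
below (not kernel code); since the LLONG_MIN constant is loaded into the
scratch register BPF_REG_AX, src_reg is only ever read:

#include <limits.h>

/* rough C rendering of the AX-based patchlet above; "ax" stands in
 * for BPF_REG_AX, so src needs no restore in any path
 */
static long long sdiv64_ax(long long dst, long long src)
{
        long long ax;

        if (src == 0)
                return 0;                  /* Rx sdiv 0 -> 0 */
        if (src == -1) {
                ax = LLONG_MIN;            /* BPF_LD_IMM64(BPF_REG_AX, ...) */
                if (dst == ax)
                        return dst;        /* JEQ skips *insn */
        }
        return dst / src;                  /* *insn */
}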

Indeed, this is much simpler. I forgot to use BPF_REG_AX somehow...


+                *insn,
+            };

Have you also looked into rejecting this pattern upfront at load time when
it's a known constant, as we do with div by 0 in check_alu_op()?

We probably cannot do this for the sdiv case. For example, r1/0 or r1%0 can
be rejected by the verifier because the divisor is a known constant. But
r1/-1 cannot be rejected, since r1 is most likely not the constant LLONG_MIN.
If the divisor is a constant -1, though, we can patch the insn to handle the
r1/-1 case.
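
If we did patch for a constant -1 divisor, no runtime divide would even be
needed, since x sdiv -1 is -x for every value except LLONG_MIN. A
hypothetical sketch, not part of this patch:

#include <limits.h>

/* hypothetical: semantics for a known-constant -1 divisor */
static long long sdiv64_by_minus1(long long dst)
{
        /* negating LLONG_MIN would overflow; per this patch it stays
         * LLONG_MIN, while all other values simply flip sign
         */
        return dst == LLONG_MIN ? LLONG_MIN : -dst;
}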


Otherwise lgtm if this is equivalent to arm64 as you describe.

-            patchlet = isdiv ? chk_and_div : chk_and_mod;
-            cnt = isdiv ? ARRAY_SIZE(chk_and_div) :
-                      ARRAY_SIZE(chk_and_mod) - (is64 ? 2 : 0);
+            if (is_sdiv64) {
+                patchlet = chk_and_sdiv64;
+                cnt = ARRAY_SIZE(chk_and_sdiv64);
+            } else {
+                patchlet = isdiv ? chk_and_div : chk_and_mod;
+                cnt = isdiv ? ARRAY_SIZE(chk_and_div) :
+                          ARRAY_SIZE(chk_and_mod) - (is64 ? 2 : 0);
+            }
                new_prog = bpf_patch_insn_data(env, i + delta, patchlet, cnt);
              if (!new_prog)
