Re: [PATCH v5] bpf: core: fix shift-out-of-bounds in ___bpf_prog_run

On 6/15/21 11:38 PM, Eric Biggers wrote:
On Tue, Jun 15, 2021 at 02:32:18PM -0700, Eric Biggers wrote:
On Tue, Jun 15, 2021 at 11:08:18PM +0200, Daniel Borkmann wrote:
On 6/15/21 9:33 PM, Eric Biggers wrote:
On Tue, Jun 15, 2021 at 07:51:07PM +0100, Edward Cree wrote:

As I understand it, the UBSAN report is coming from the eBPF interpreter,
   which is the *slow path* and indeed on many production systems is
   compiled out for hardening reasons (CONFIG_BPF_JIT_ALWAYS_ON).
Perhaps a better approach to the fix would be to change the interpreter
   to compute "DST = DST << (SRC & 63);" (and similar for other shifts and
   bitnesses), thus matching the behaviour of most chips' shift opcodes.
This would shut up UBSAN, without affecting JIT code generation.

Yes, I suggested that last week
(https://lkml.kernel.org/netdev/YMJvbGEz0xu9JU9D@xxxxxxxxx).  The AND will even
get optimized out when compiling for most CPUs.
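
To illustrate the idea outside the interpreter, here is a minimal standalone
sketch (the function names are made up for illustration, not from the kernel
tree):

#include <stdint.h>

/*
 * Shifting by >= the operand width is undefined behavior in C, even though
 * x86's SHL/SHR and arm64's LSLV/LSRV simply use the low bits of the count.
 * Masking the count makes the C well-defined, matches that hardware
 * behavior, and lets the compiler drop the AND entirely on such archs.
 */
uint64_t lsh64(uint64_t dst, uint64_t src)
{
	return dst << (src & 63);	/* one plain 64-bit shift on x86-64/arm64 */
}

uint32_t lsh32(uint32_t dst, uint32_t src)
{
	return dst << (src & 31);	/* likewise for the 32-bit case */
}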

Did you check if the generated interpreter code for e.g. x86 is the same
before/after with that?

Yes, on x86_64 with gcc 10.2.1, the disassembly of ___bpf_prog_run() is the same
both before and after (with UBSAN disabled).  Here is the patch I used:

diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 5e31ee9f7512..996db8a1bbfb 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -1407,12 +1407,30 @@ static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn)
  		DST = (u32) DST OP (u32) IMM;	\
  		CONT;
+	/*
+	 * Explicitly mask the shift amounts with 63 or 31 to avoid undefined
+	 * behavior.  Normally this won't affect the generated code.

The last sentence should probably be more specific about 'normally', e.g. that
the compiler is expected to optimize the AND away on archs like x86. Is arm64
also covered by this ... do you happen to know on which archs this won't be
the case?

Additionally, I think such a comment should be clearer in that it also needs
to give proper guidance to JIT authors who look at the interpreter code to
see what they need to implement; in other words, they should not end up
copying an explicit AND instruction emission where it is not needed.

+	 */
+#define ALU_SHIFT(OPCODE, OP)		\
+	ALU64_##OPCODE##_X:		\
+		DST = DST OP (SRC & 63);\
+		CONT;			\
+	ALU_##OPCODE##_X:		\
+		DST = (u32) DST OP ((u32)SRC & 31);	\
+		CONT;			\
+	ALU64_##OPCODE##_K:		\
+		DST = DST OP (IMM & 63);	\
+		CONT;			\
+	ALU_##OPCODE##_K:		\
+		DST = (u32) DST OP ((u32)IMM & 31);	\
+		CONT;

For the *_K cases, out-of-range shift amounts are already explicitly rejected
by the verifier. Is this nevertheless required here to suppress a UBSAN false
positive?
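
For context, the check in question lives in check_alu_op() in
kernel/bpf/verifier.c; paraphrased excerpt (from memory, not the exact
source):

	/* constant shift amounts outside [0, size) never reach the
	 * interpreter; the verifier rejects the program at load time
	 */
	if ((opcode == BPF_LSH || opcode == BPF_RSH ||
	     opcode == BPF_ARSH) && BPF_SRC(insn->code) == BPF_K) {
		int size = BPF_CLASS(insn->code) == BPF_ALU64 ? 64 : 32;

		if (insn->imm < 0 || insn->imm >= size) {
			verbose(env, "invalid shift %d\n", insn->imm);
			return -EINVAL;
		}
	}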

  	ALU(ADD,  +)
  	ALU(SUB,  -)
  	ALU(AND,  &)
  	ALU(OR,   |)
-	ALU(LSH, <<)
-	ALU(RSH, >>)
+	ALU_SHIFT(LSH, <<)
+	ALU_SHIFT(RSH, >>)
  	ALU(XOR,  ^)
  	ALU(MUL,  *)
  #undef ALU

Note, I missed the arithmetic right shifts later on in the function.  Same
result there, though; the ARSH cases would get the same masking treatment,
along these lines (sketch only, untested):
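
	/* hypothetical masked ARSH handlers, mirroring ALU_SHIFT above */
	ALU64_ARSH_X:
		(*(s64 *) &DST) >>= (SRC & 63);
		CONT;
	ALU64_ARSH_K:
		(*(s64 *) &DST) >>= (IMM & 63);
		CONT;
	ALU_ARSH_X:
		DST = (u64) (u32) (((s32) DST) >> ((u32) SRC & 31));
		CONT;
	ALU_ARSH_K:
		DST = (u64) (u32) (((s32) DST) >> (IMM & 31));
		CONT;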

- Eric