Re: [PATCH] Incorrect backtracking for load/store or atomic ops

>    
> On Fri, Oct 20, 2023 at 9:06 AM Tao Lyu <tao.lyu@xxxxxxx> wrote:
>>
>> Hi,
>>
>> I found that the backtracking logic of the eBPF verifier is flawed
>> when it encounters 1) normal load and store instructions or
>> 2) atomic memory instructions.
>>
>> # Normal load and store
>>
>> Here, I show one case about the normal load and store instructions,
>> which can be exploited to achieve arbitrary read and write with two requirements:
>> 1) The uploading program should have at least CAP_BPF, which is required for most eBPF applications.
>> 2) Disable CPU mitigations by adding "mitigations=off" in the kernel booting command line. Otherwise,
>> the Spectre mitigation in the eBPF verifier will prevent exploitation.
>>
>>                                    1: r3 = r10 (stack pointer)
>>                                    3:           if cond
>>                                                  /           \
>>                                                /                \
>>         4: *(u64 *)(r3 -120) = 200      6: *(u64 *)(r3 -120) = arbitrary offset to r2
>>                  verification state 1                  verification state 2 (prune point)
>>                                               \                  /
>>                                                 \              /
>>                                       7:  r6 = *(u64 *)(r1 -120)
>>                                                          ...
>>                                     17:    r7 = a map pointer
>>                                     18:            r7 += r6
>>                          // Out-of-bound access from the right side path
>>
>> Given an eBPF program (tools/testing/selftests/bpf/test_precise.c)
>> whose simplified control-flow graph looks like the above,
>> when the verifier goes through the first (left-side) path and reaches insn 18,
>> it backtracks on register r6 as shown below.
>>
>> 18: (0f) r7 += r6
>> mark_precise: frame0: last_idx 18 first_idx 17 subseq_idx -1
>> mark_precise: frame0: regs=r6 stack= before 17: (bf) r7 = r0
>> ...
>> mark_precise: frame0: regs=r6 stack= before 7: (79) r6 = *(u64 *)(r3 -120)
>>
>> However, the backtracking process is problematic when it reaches insn 7.
>> Insn 7 loads a value from the stack, but the stack pointer is expressed through r3 instead of r10.
>> ** In this case, the verifier (as shown below) will reset the precision on r6 and not mark the precision on the stack slot. **
>> Afterward, the backtracking finishes without annotating any registers in any verifier states.
>>
>>     else if (class == BPF_LDX) {
>>         if (!bt_is_reg_set(bt, dreg))
>>             return 0;
>>         bt_clear_reg(bt, dreg);
>>         if (insn->src_reg != BPF_REG_FP)
>>             return 0;
>>         ...
>>    }
>>
>> Finally, when the second (right-side) path reaches insn 7 again,
>> it compares the verifier states with the previous one.
>> It considers the two states equal because no precision is marked on r6,
>> so the eBPF program can easily pass the verifier
>> even though the second path contains an invalid access offset.
>> We have successfully exploited this bug for getting the root privilege.
>> If needed, we can share the exploitation.
>> BTW, using similar instructions in a subprogram can also trigger an assertion in the verifier:
>> "[ 1510.165537] verifier backtracking bug
>> [ 1510.165582] WARNING: CPU: 2 PID: 382 at kernel/bpf/verifier.c:3626 __mark_chain_precision+0x4568/0x4e50"
>>
>>
>>
>> IMO, to fully patch this bug, we need to know whether the insn->src_reg is an alias of BPF_REG_FP.
>
> Yes!
>
>> However, it might require too much additional code.
>
> No :) I don't think it's a lot of code. I've been meaning to tackle
> this for a while, but perhaps the time is now.
>
> The plan is to use jmp_history to record an extra flags for some
> instructions (even if they are not jumps). Like in this case, we can
> remember for LDX and STX instructions that they were doing register
> load/spill, and take that into account during backtrack_insn() without
> having to guess based on r10.
>
> I have part of this ready locally, I'll try to finish this up in a
> next week or two. Stay tuned (unless you want to tackle that
> yourself).

Great! I'll keep following this.

>
>> Or we just do not clear the precision on the src register.
>>
>> # Atomic memory instructions
>>
>> Then, I show that the backtracking on atomic load and store instructions is also flawed.
>> As shown below, when the backtrack_insn() function in the verifier meets a store instruction,
>> it checks whether the stack slot is marked precise. If not, it just returns.
>>
>>             if (!bt_is_slot_set(bt, spi))
>>                 return 0;
>>             bt_clear_slot(bt, spi);
>>             if (class == BPF_STX)
>>                 bt_set_reg(bt, sreg);
>>
>> Assume we have an atomic_fetch_or instruction (tools/testing/selftests/bpf/verifier/atomic_precision.c) shown below.
>>
>> 7: (4c) w7 |= w3
>> mark_precise: frame1: last_idx 7 first_idx 0 subseq_idx -1
>> mark_precise: frame1: regs=r7 stack= before 6: (c3) r7 = atomic_fetch_or((u32 *)(r10 -120), r7)
>> mark_precise: frame1: regs=r7 stack= before 5: (bf) r7 = r10
>> mark_precise: frame1: regs=r10 stack= before 4: (7b) *(u64 *)(r3 -120) = r1
>> mark_precise: frame1: regs=r10 stack= before 3: (bf) r3 = r10
>> mark_precise: frame1: regs=r10 stack= before 2: (b7) r1 = 1000
>> mark_precise: frame1: regs=r10 stack= before 0: (85) call pc+1
>> BUG regs 400
>>
>> Before backtracking to it, r7 has already been marked as precise.
>> Since the value of r7 after atomic_fetch_or comes from r10-120,
>> the precision should be propagated to r10-120.
>> However, because the stack slot r10-120 is not marked,
>> it doesn't satisfy the bt_is_slot_set(bt, spi) condition shown above.
>> Finally, it just returns without marking r10-120 as precise.
>
> this seems like the same issue with not recognizing stack access
> through any other register but r10?
>
> Or is there something specific to atomic instructions here? I'm not
> very familiar with them, so I'll need to analyse the code first.

Not recognizing stack accesses through non-r10 registers affects atomic instructions as well, since atomic instructions are also memory store instructions.

But there is another aspect: atomic ops have more complex/different semantics compared to plain memory stores, and the backtracking logic doesn't take that into account.

For example, insn r7 = atomic_fetch_or((u32 *)(r10 -120), r7) loads the original value at r10-120 into the final r7, and stores the "or" of the value at r10-120 and the original r7.

Case 1: Assume r7 is marked as precise when the backtracking code meets this instruction; it should then propagate the precision to the stack slot r10-120.

Case 2: Assume r10-120 is marked as precise; then both r10-120 and the original r7 should be marked precise, since the stored value depends on both.

>
>>
>> This bug can lead to the verifier's assertion as well:
>> "[ 1510.165537] verifier backtracking bug
>> [ 1510.165582] WARNING: CPU: 2 PID: 382 at kernel/bpf/verifier.c:3626 __mark_chain_precision+0x4568/0x4e50"
>>
>> I've attached a patch that correctly propagates the precision for atomic instructions.
>> But it still can't solve the problem of the stack slot being expressed through a register other than r10.
>
> I can try to solve stack access through non-r10, but it would be very
> useful if you could provide tests that trigger the above situations
> you described. Your test_precise.c test below is not the right way to
> add tests, it adds a new binary and generally doesn't fit into
> existing set of tests inside test_progs. Please see
> progs/verifier_xadd.c and prog_tests/verifier.c and try to convert
> your tests into that form (you also will be able to use inline
> assembly instead of painful BPF_xxx() instruction macros).
>
> Thanks.

Sure. I'll reconstruct the test cases, maybe by the end of this week, because I'm attending the SOSP conference this week.
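
Something like the sketch below, in the progs/verifier_*.c inline-assembly style, is what I have in mind for the atomic case (untested; the section name, the exact __msg string, and the atomic asm syntax are assumptions I still need to verify against bpf_misc.h and the clang BPF assembler):

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include "bpf_misc.h"

SEC("?raw_tp")
__failure __msg("verifier backtracking bug")
__naked void atomic_fetch_or_precision(void)
{
	asm volatile (
	"r3 = r10;"
	"r1 = 1000;"
	"*(u64 *)(r3 - 120) = r1;"
	"r7 = r10;"
	"r7 = atomic_fetch_or((u32 *)(r10 - 120), r7);"
	"w7 |= w3;"
	"r0 = 0;"
	"exit;"
	::: __clobber_all);
}

char _license[] SEC("license") = "GPL";
```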

>
>>
>> Signed-off-by: Tao Lyu <tao.lyu@xxxxxxx>
>> ---
>>  kernel/bpf/verifier.c                         |  58 +++++-
>>  tools/testing/selftests/bpf/Makefile          |   6 +-
>>  tools/testing/selftests/bpf/test_precise.c    | 186 ++++++++++++++++++
>>  .../selftests/bpf/verifier/atomic_precision.c |  19 ++
>>  4 files changed, 263 insertions(+), 6 deletions(-)
>>  create mode 100644 tools/testing/selftests/bpf/test_precise.c
>>  create mode 100644 tools/testing/selftests/bpf/verifier/atomic_precision.c
>>
>
>[...]



