[PATCH RFC bpf v1 0/3] Verifier callback handling

Hi, these are fixes for issues I noticed while working on the callback code.
It is an RFC because I would like to discuss how to fix one other problem. I
have a rough idea, but would like to discuss it before working on it, since
it isn't trivial and the conservative fix doesn't pass existing selftests.

It seems like the problem anticipated in [0] actually exists, in a different
form, with all the current callback based loop helpers. The main reason is
that the callback function is only explored as if it executed once.

Some example cases are included in the selftests to show the problem and to
make sure we keep catching and rejecting them. Since the callback acting as
the loop body is only verified once, the execution state after the for_each
helper returns is, from the verifier's perspective, as if the callback had
executed exactly once. Of course it may execute multiple times, and how many
times is statically unknowable.

Patch 1 fixes the case of acquiring and releasing references; unless I missed
some corner case, it seems to mostly work fine.
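
To make the pattern concrete, here is a rough sketch (hypothetical names,
simplified, not lifted from the included selftests) of the kind of program
patch 1 must reject: the callback acquires a ringbuf reference on every
iteration, while the caller releases at most one after bpf_loop() returns.

struct {
	__uint(type, BPF_MAP_TYPE_RINGBUF);
	__uint(max_entries, 4096);
} ringbuf SEC(".maps");

struct cb_ctx {
	void *ptr;
};

static int cb(__u32 idx, struct cb_ctx *ctx)
{
	/* acquires a new reference on every iteration */
	ctx->ptr = bpf_ringbuf_reserve(&ringbuf, 8, 0);
	return 0;
}

SEC("tc")
int leak_ref(struct __sk_buff *skb)
{
	struct cb_ctx ctx = { .ptr = NULL };

	bpf_loop(100, cb, &ctx, 0);
	/* releases at most one reference; any acquired in earlier
	 * iterations are leaked
	 */
	if (ctx.ptr)
		bpf_ringbuf_submit(ctx.ptr, 0);
	return 0;
}

With patch 1, the intent is that references acquired inside the callback must
also be released within it, so a pattern like this gets rejected.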

In the future, when we have a case where a synchronous callback is only
executed once by a helper, we can improve on this by relaxing these
restrictions, so that transferring references and verifying the callback as if
it executes only once become possible. For now, all in_callback_fn callbacks
may be executed multiple times, hence that change is left for when it will be
needed.

Now, the important part:
------------------------

The included selftest will fail/crash, as one fix is missing from the series.
Whatever I tried so far wasn't backwards compatible with existing selftests
(e.g. conservatively marking all reads as UNKNOWN, and having all writes scrub
the stack slots). Some existing tests, for example, read an skb pointer
through the ptr_to_stack passed to the callback.

It is not clear to me what the correct fix would be for handling reads and
writes to the stack inside the callback function. If a stack slot is never
touched inside the callback in any of the states, it should be safe to allow
the callback to load spilled registers from the caller's stack frame; even
allowing precise scalars would be safe if they are read from untouched slots.

The idea I have is roughly:

In the first pass, verify as if the callback executes once. This is the same
as what happens now; it may e.g. load a spilled precise scalar and then
overwrite it, so verification of the insns before the overwriting store still
considers the register precise. In reality, this assumption may be wrong in
later loop iterations.

For example:

int i = 1;
int arr[2] = {0};

bpf_loop(100, cb, &i, 0);

cb:
	int i = *(int *)ctx;		 // seen as 1
	func(*((int *)ctx - i));	 // stack read ok, arr[1]
	*(int *)ctx = i + 1;		 // i is now 2

Verification ends here, but in later iterations this would start reading
uninitialized stack.

To fix this, we keep a bitmap of scratched slots for the stack in the caller
frame. Once we see BPF_EXIT, we re-verify the callback, but this time any
register reading from slots that were scratched the first time around is
marked with mark_reg_unknown (i.e. not precise). Moreover, in the caller stack
such slots will be marked STACK_MISC. This might not be able to track writes
smaller than 8 bytes, but I guess that is fine (since that would require 512
bits instead of 64).
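
A very rough sketch of the bookkeeping (hypothetical field and helper names,
not actual verifier code): one bit per 8-byte spill slot of the caller's
frame, set whenever the callback writes that slot.

/* MAX_BPF_STACK is 512 bytes, so at 8-byte granularity 64 bits cover
 * every spill slot of the caller's frame.
 */
struct bpf_func_state {
	/* ... existing fields ... */
	u64 callback_scratched_slots;	/* hypothetical */
};

static void mark_callback_scratched(struct bpf_func_state *caller, int spi)
{
	caller->callback_scratched_slots |= BIT_ULL(spi);
}

static bool slot_scratched_by_callback(struct bpf_func_state *caller, int spi)
{
	return caller->callback_scratched_slots & BIT_ULL(spi);
}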

The big problem I can see is when the precise register guards a write in
another branch that was not taken the first time around. In that case
push_stack() is not done to explore the other branch, and is_branch_taken() is
used to predict the outcome.

But: for this to cause a problem, the precise reg guarding the branch must
change its value (and we can see that if it is being read from a scratched
slot), so by doing another pass we would get rid of this problem, i.e. one
more pass after downgrading precise registers to unknown.
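
To illustrate (hypothetical example, not one of the included selftests): in
the first pass the guard below is a precise constant, so is_branch_taken()
prunes the other arm, and the bad write is never explored until the guard is
downgraded to unknown in a later pass.

static int cb(__u32 idx, void *ctx)
{
	int i = *(int *)ctx;		/* precise constant in the first pass */

	if (i < 8)
		*(int *)ctx = i + 1;	/* only arm explored initially */
	else
		/* out-of-bounds stack access, pruned by is_branch_taken() */
		*((int *)ctx + i) = 0;
	return 0;
}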

If I can get hints towards new ideas, or feedback on whether this would be the
correct approach, I can work on a proper fix for this case.

It seems like roughly the same approach will be required for the open coded
iteration mentioned in [0], so we might be able to kill two birds with one
stone.

Thanks!

  [0]: https://lore.kernel.org/bpf/33123904-5719-9e93-4af2-c7d549221520@xxxxxx

Kumar Kartikeya Dwivedi (3):
  bpf: Move bpf_loop and bpf_for_each_map_elem under CAP_BPF
  bpf: Fix reference state management for synchronous callbacks
  selftests/bpf: Add tests for callbacks fixes

 include/linux/bpf_verifier.h                  |  11 ++
 kernel/bpf/helpers.c                          |   8 +-
 kernel/bpf/verifier.c                         |  42 +++--
 .../selftests/bpf/prog_tests/cb_refs.c        |  49 ++++++
 tools/testing/selftests/bpf/progs/cb_refs.c   | 152 ++++++++++++++++++
 5 files changed, 249 insertions(+), 13 deletions(-)
 create mode 100644 tools/testing/selftests/bpf/prog_tests/cb_refs.c
 create mode 100644 tools/testing/selftests/bpf/progs/cb_refs.c

-- 
2.34.1



