On Tue, Apr 02, 2019 at 04:13:58PM +0200, Daniel Borkmann wrote:
> On 04/02/2019 04:08 AM, Alexei Starovoitov wrote:
> > On 4/1/19 9:35 AM, Daniel Borkmann wrote:
> >> On 04/01/2019 04:17 PM, Daniel Borkmann wrote:
> >>> On 03/30/2019 01:16 AM, Alexei Starovoitov wrote:
> >>>> Add 3 basic tests that stress verifier scalability.
> >>>>
> >>>> test_verif_scale1.c calls the non-inlined jhash() function 90 times on
> >>>> different positions in the packet.
> >>>> This test simulates network packet parsing.
> >>>> The jhash function is ~140 instructions and the main program is ~1200 insns.
> >>>>
> >>>> test_verif_scale2.c force-inlines the jhash() function 90 times.
> >>>> This program is ~15k instructions long.
> >>>>
> >>>> test_verif_scale3.c also calls the non-inlined jhash() function 90 times,
> >>>> but this time jhash has to process 32 bytes from the packet
> >>>> instead of 14 bytes as in tests 1 and 2.
> >>>> The jhash function is ~230 insns and the main program is ~1200 insns.
> >>>>
> >>>> $ test_progs -s
> >>>> can be used to see verifier stats.
> >>>>
> >>>> Signed-off-by: Alexei Starovoitov <ast@xxxxxxxxxx>
> >>>
> >>> Do you also have some test cases with actual 1M insns, resp. ones that hit
> >>> the new complexity limit, to check the timing / resources they consume? I
> >>> think these would be good to have as part of the selftests as well.
> >>
> >> (Another thought on why it would be good to have such tests: to see
> >> whether we would otherwise break long jumps again due to offset
> >> truncation, which we had in the core as well as in JITs in the past.)
> >
> > llvm truncates jump offsets for large programs.
> > Unfortunately we need to introduce a JA instruction
> > with a 32-bit offset on the kernel side before we can fix llvm.
> > There are plenty of other scalability issues on the kernel side.
> > This patch set is the first step.
>
> Right.
>
> > Only a non-jumpy program of 1m insns loads and works.
> > Realistically we're still far away from accepting large programs,
> > but to get there we need to bump the limit and experience these issues.
>
> I think my main worry is that if we're realistically still far away from
> accepting 1M-insn programs and have potential unknown blockers in the
> road wrt verifying safety, then why lift the limit for root to 1M
> today already?
>
> I think the complexity limit itself is fine, but why not be more
> conservative on the number of instructions, something we believe the
> verifier can realistically handle /today/ wrt worst-case consumption in
> case of complexity attacks, and go further from there? Once it's at 1M in
> UAPI it's effectively frozen, and then we'll be re-adding somewhat similar
> upper limits again which cap the 1M to something much lower (though I
> doubt we'll see such large programs any time soon).

1m is not uapi.
4k sort of was a uapi limit, but in practice it wasn't.
Could we load all programs that fit into 4k? Obviously not.
There are plenty of internal verifier limits, like
BPF_COMPLEXITY_LIMIT_STACK. They are discoverable by user space, but they
are not uapi, meaning that we can change them in whichever direction.
BPF_COMPLEXITY_LIMIT_STATES is 64, but I'm thinking of decreasing it to 8.

Here is the commit log from patch 6:
"This patch removes the limit for root programs from uapi.
BPF_COMPLEXITY_LIMIT_INSNS is the kernel-internal limit,
and success in loading a program no longer depends on program size,
but only on the 'smartness' of the verifier.
The verifier will continue to get smarter with every kernel release."

Removing the 4k limit for root doesn't change much from the user's
perspective. Every program has a different limit depending on the kernel
version. I gave an example in patch 6 where folks compile the same program
with a different #define because verifier smartness differs between
kernels. 4k was a stated limit, but irrelevant in practice.

Am I happy with this situation? Of course not.
I've been talking about the requirement to support 1m instructions for a
year now, but most bpf developers didn't take it seriously, and I see a
creep of features into the verifier that makes it harder and harder to
support large programs. Most folks operate under the assumption that bpf
programs are tiny, so it's ok to use inefficient algorithms.
This has to change.
We have to remove the limit and experience these issues; otherwise that
day of the verifier being 'ready' will never come.

With the 4k limit, syzbot found a way to make the verifier spend 10
seconds verifying a program ~300 instructions long.
Will it find something again? No doubt.
Whether the limit on program size is there or not is not going to stop
fuzzing tools.

The removal of the instruction limit is changing bpf developers' mindset.
It's changing _us_. That's the key difference. Supporting large programs
is a requirement, and we need to make sure the internals of the verifier
can scale.