On Tue, Oct 24, 2023 at 2:26 PM Paul Chaignon <paul.chaignon@xxxxxxxxx> wrote: > > On Mon, Oct 23, 2023 at 10:51:57PM -0700, Andrii Nakryiko wrote: > > On Mon, Oct 23, 2023 at 3:50 PM Andrii Nakryiko > > <andrii.nakryiko@xxxxxxxxx> wrote: > > > > > > On Mon, Oct 23, 2023 at 8:52 AM Paul Chaignon <paul@xxxxxxxxxxxxx> wrote: > > > > > > > > On Mon, Oct 23, 2023 at 10:05:41PM +0800, Shung-Hsi Yu wrote: > > > > > On Sat, Oct 21, 2023 at 09:42:46PM -0700, Andrii Nakryiko wrote: > > > > > > On Fri, Oct 20, 2023 at 10:37 AM Srinivas Narayana Ganapathy > > > > > > <sn624@xxxxxxxxxxxxxx> wrote: > > > > > > > > > > > > > > Hi all, > > > > > > > > > > > > > > Thanks, @Shung-Hsi, for bringing up this conversation about > > > > > > > integrating formal verification approaches into the BPF CI and testing. > > > > > > > > > > > > > > > On 19-Oct-2023, at 1:34 PM, Andrii Nakryiko <andrii.nakryiko@xxxxxxxxx> wrote: > > > > > > > > On Thu, Oct 19, 2023 at 12:52 AM Shung-Hsi Yu <shung-hsi.yu@xxxxxxxx> wrote: > > > > > > > >> On Thu, Oct 19, 2023 at 03:30:33PM +0800, Shung-Hsi Yu wrote: > > > > > > > > [...] > > > > > > > > > > > >>> FWIW an alternative approach that speeds things up is to use model checkers > > > > > > > >>> like Z3 or CBMC. On my laptop, using Z3 to validate tnum_add() against *all* > > > > > > > >>> possible inputs takes less than 1.3 seconds[3] (based on code from the paper [1], > > > > > > > >>> but I somehow lost the link to their GitHub repository). > > > > > > > >> > > > > > > > >> Found it. For reference, code used in "Sound, Precise, and Fast Abstract > > > > > > > >> Interpretation with Tristate Numbers"[1] can be found at > > > > > > > >> https://github.com/bpfverif/tnums-cgo22/blob/main/verification/tnum.py > > > > > > > >> > > > > > > > >> Below is a truncated form of the above that only checks tnum_add() and requires > > > > > > > >> a package called python3-z3 on most distros: > > > > > > > > > > > > > > > > Great! 
I'd be curious to see how range tracking logic can be encoded > > > > > > > > using this approach, please give it a go! > > > > > > > > > > > > > > We have some recent work that applies formal verification approaches > > > > > > > to the entirety of range tracking in the eBPF verifier. We posted a > > > > > > > note to the eBPF mailing list about it some time ago: > > > > > > > > > > > > > > [1] https://lore.kernel.org/bpf/SJ2PR14MB6501E906064EE19F5D1666BFF93BA@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/T/#u > > > > > > > > > > > > Oh, I totally missed this, as I just went on a long vacation a few > > > > > > days before that and declared email bankruptcy afterwards. I'll try to > > > > > > give it a read, though I see lots of math symbols there and make no > > > > > > promises ;) > > > > > > > > > > Felt the same when I started reading their previous work, but I can vouch > > > > > that their work is definitely worth the read. (Though I have to admit > > > > > I secretly chant "math is easier than code, math is easier than code" to > > > > > convince my mind not to go into flight mode when seeing math symbols ;D) > > > > Hari et al. did a great job at explaining the intuitions throughout the > > > > paper. So even if you skip the math, you should be able to follow. > > > > > > > > Having an understanding of abstract interpretation helps. The Mozilla > > > > wiki has a great introduction [1] and I wrote a shorter BPF example of it [2]. > > > > > > > > 1 - https://wiki.mozilla.org/Abstract_Interpretation > > > > 2 - https://pchaigno.github.io/abstract-interpretation.html > > > > > > > > > > thanks :) > > > > Hey Paul, > > > > I had a bunch of time on the plane to catch up on reading, so I read > > your blog post about PREVAIL among other things. Hopefully you don't > > mind some comments posted here. > > > > > Observation 1. The analysis must track binary relations among registers. 
> > > > It's curious that the BPF verifier in the kernel actually doesn't track > > relations in this sense, and yet it works for lots and lots of > > practical programs. :) > > It kind of does, but only for the types where it's actually needed. For > example if we have: > > 3: (bf) r3 = r1 > 4: (07) r3 += 14 > 5: (2d) if r3 > r2 goto pc+50 > R1_w=pkt(id=0,off=0,r=14,imm=0) R2_w=pkt_end(id=0,off=0,imm=0) > R3_w=pkt(id=0,off=14,r=14,imm=0) > > R1's range actually refers to the relationship to pkt_end, R2 at this > point. So, R1's r=14 carries the same information as R1 + 14 <= R2. > > A big difference is that the Linux verifier is very tailored to eBPF. So > it doesn't perform this sort of more complicated tracking for all > registers and stack slots. I suspect that plays a big role in the lower > performance of PREVAIL. > Yes, PACKET_END is a special case, so you are right. I had SCALAR_VALUE registers in mind, and for those we don't track relationships (even if in some cases that could help with some compiler optimizations). > > > > > (Speaking of kernel verifier implementation) > > > > > Third, it does not currently support programs with loops. > > > > It does, and I'm not even talking about bounded loops that are > > basically unrolled as many times as necessary. We have the bpf_loop() > > helper, which calls a given callback as many times as requested, but even > > better we now have open-coded iterators. Please check the selftests using > > the bpf_for() macro. > > > > Note that Eduard is fixing discovered issues in open-coded iterators > > convergence checks and logic, but other than that BPF does have real > > loops. > > I'll fix that. I also wasn't aware of the more recent bpf_for. Nice! > > The larger point is that PREVAIL is able to fully verify free-form > loops. It doesn't impose a structure or rely on trusted code (helpers > and kfuncs). > In the end, I don't think it matters much. The amount of trusted code > for this is small and well understood. 
And we probably don't want > ill-structured loops in our C code anyway. But the lack of loop support > used to attract a lot of attention when speaking of the BPF verifier, so I > guess that's why it ended up being a selling point in the paper. Yeah, I get that. I only pointed this out because your blog post written in September of 2023 will be read by someone and they will think that the BPF verifier still doesn't support loops, which is not true :) So it would be nice to point out that this is possible. > > > > > > > And about a confusing bit at the end: > > > > > 0: r0 = 0 > > > 1: if r1 > 10 goto pc+4 // r1 ∈ [0; 10] > > > 2: if r2 > 10 goto pc+3 // r2 ∈ [0; 10] > > > 3: r1 *= r2 // r1 ∈ [0; 100] > > > 4: if r1 != 5 goto pc+1 > > > 5: r1 /= r0 // Division by zero! > > > 6: exit > > > > > > After instruction 2, both r1 and r2 have abstract value [0; 10]. > > > After instruction 3, r1 holds the multiplication of r1 and r2 and > > > therefore has abstract value [0; 100]. When considering the condition at > > > instruction 4, because 11 ∈ [0; 100], we will walk both paths and hit the > > > division by zero. > > > > > > Except we know that r1 can never take value 11. The number 11 is a prime > > > number, so it is not a multiple of any integers between 0 and 10. > > > > This made me pause for a while. I think you meant to have `4: if r1 != > > 11 goto pc+1` and then go on to explain that you can't get 11 by > > multiplying numbers in range [0; 10], because 11 is greater than 10 > > (so can't be 1 x 11), but also it's a prime number, so you can't get > > it from multiplication of two integers. So please consider fixing up > > the code example, and perhaps elaborate a bit more on why 11 can't > > happen in actuality. This example does look a bit academic, but, well, > > formal methods and stuff, it's fitting! ;) > > You're absolutely right. It should be 11 instead of 5. That's what I get > for changing the ranges from [0; 4] to [0; 10] at the last minute. 
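As an aside, the corrected claim (that 11 can never result from multiplying two values in [0; 10], even though interval arithmetic widens r1 to [0; 100]) is easy to confirm by brute force. A throwaway Python sketch, not part of any patch:

```python
# Enumerate every product reachable when r1 and r2 are both in [0; 10].
products = {a * b for a in range(11) for b in range(11)}

# Interval arithmetic says r1 is in [0; 100] after `r1 *= r2`, and the
# interval bounds really are reachable...
assert 0 in products and 100 in products

# ...but 11 is not: 1 * 11 needs a factor greater than 10, and 11 is
# prime, so no other factorization exists. The branch guarding the
# division by zero can therefore never be taken on a real execution.
assert 11 not in products
```

The interval domain cannot see this because it tracks each register's bounds in isolation; only a more precise domain (or the concrete enumeration above) rules the value out.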
> > > > > > > It’s a bit disappointing that the paper doesn’t include any comparison with the Linux verifier on the same corpus of BPF programs. > > > > Indeed, I concur. It would be interesting to have this comparison as > of the most recent version of PREVAIL and Linux's BPF verifier. > > > > > > Either way, thanks a lot for the very approachable and informative > > blog post, it was a pleasure to read! Great work! > > Thanks for the read and the review! I like reading about what academia > is doing around BPF so certainly not my last post on those topics :) > > > > > > > > > > > > > > > > > > > > > > Our paper, also posted on [1], appeared at Computer Aided Verification (CAV)’23. > > > > > > > > > > > > > > [2] https://people.cs.rutgers.edu/~sn624/papers/agni-cav23.pdf > > > > > > > > > > > > > > Together with @Paul Chaignon and @Harishankar Vishwanathan (CC'ed), we > > > > > > > are working to get our tooling into a form that is integrable into BPF > > > > > > > CI. We look forward to your feedback when we post patches. > > > > > > > > > > > > If this could be integrated in a way that we can regularly run this > > > > > > and validate the latest version of the verifier, that would be great. I have a > > > > > > second part of verifier changes coming up that extends range tracking > > > > > > logic further to support range vs range (as opposed to range vs const > > > > > > that we do currently) comparisons and is_branch_taken, so having > > > > > > independent and formal verification of these changes would be great! > > > > The current goal is to have this running somewhere regularly (maybe > > > > releases + manual triggers) in a semi-automated fashion. The two > > > > challenges today are the time it takes to run verification (days without > > > > parallelization) and whether the bit of conversion & glue code will be > > > > maintainable long term. 
> > > > > > > > I'm fairly optimistic on the first as we're already down to hours with > > > > basic parallelization. The second is harder to predict, but I guess your > > > > patches will be a good exercise :) > > > > > > > > I've already run the verification on v6.0 to v6.3; v6.4 is currently > > > > running. Hari et al. had verified v4.14 to v5.19 before. I'll give it a > > > > try on this patchset afterward. > > > Cool, that's great! The second part of this work will be generalizing > > > this logic in the kernel to support range vs range comparisons, so I'd > > > appreciate it if you could validate that one as well. I'm finalizing > > > it, but will wait for this patch set to land first before posting the > > > second part to have proper CI testing runs (and limit the amount of code > > > review to be done). > > Happy to! I'm planning to post all my changes as one series in a few days, so maybe you can take it for a spin at that point? > > > > > > > BTW, I've since made some more changes to these "selftests" to be a bit > > > more parallelizable, so this range_vs_consts set of tests now can run > > > in about 5 minutes on an 8+ core QEMU instance. In the second part we'll > > > have range-vs-range, so we have about 106 million cases and it takes > > > slightly more than 8 hours single-threaded. But with parallelization, > > > it's done in slightly more than one hour. > > > > > > So, of course, still too slow to run as part of a normal test_progs run, > > > but definitely easy to run locally to validate kernel changes (and > > > probably makes sense to enable on some nightly CI runs, when we have > > > them). > > > > > > Regardless, my point is that both methods of verification are > > > complementary, I think, and it's good to have both available and > > > working on the latest kernel versions. > > Completely agree! > > > > > > > > > > > > > > > > > > > > > +1 (from a quick skim) this work is already great as-is, and it'd be even > > > > > better once it gets in the CI. 
From the paper there's this: > > > > > > > > > > We conducted our experiments on ... a machine with two 10-core Intel > > > > > Skylake CPUs running at 2.20 GHz with 192 GB of memory... > > > > > > > > > > I suppose the memory requirement comes from the vast amount of state space > > > > > that the Z3 SMT solver has to go through, and perhaps that poses a > > > > > challenge for CI integration? > > > > > > > > > > Just wondering if there is some low-hanging fruit that can make things > > > > > easier for the SMT solver. > > > > This is how much memory the system had, but it didn't use it all :) > > > > When running the solver on a single core, I saw around 1GB of memory > > > > usage. With my changes to run on several cores, it can grow to a few > > > > GBs depending on the number of cores. > > > > -- > > > > Paul
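For anyone who wants to play with the tnum_add() check but doesn't have python3-z3 handy, the same soundness property can also be checked exhaustively at a reduced bit width in plain Python. This is only a sketch: tnum_add() below is transcribed from kernel/bpf/tnum.c but truncated to 4-bit arithmetic so the full enumeration finishes in well under a second; it is not a substitute for the 64-bit Z3 proof from the thread above.

```python
BITS = 4            # reduced width; the kernel uses 64-bit tnums
SIZE = 1 << BITS
MASK = SIZE - 1

def tnum_add(av, am, bv, bm):
    # Transcription of tnum_add() from kernel/bpf/tnum.c, with every
    # intermediate result truncated to BITS bits to model wrapping adds.
    sm = (am + bm) & MASK
    sv = (av + bv) & MASK
    sigma = (sm + sv) & MASK
    chi = sigma ^ sv
    mu = chi | am | bm
    return sv & ~mu & MASK, mu

def concrete(value, mask):
    # All concrete numbers a well-formed tnum (value, mask) represents:
    # known bits are fixed by value, unknown bits range freely under mask.
    return [value | x for x in range(SIZE) if x & ~mask == 0]

def all_tnums():
    # Well-formed tnums never have a bit set in both value and mask.
    return [(v, m) for v in range(SIZE) for m in range(SIZE) if v & m == 0]

def tnum_add_is_sound():
    # Soundness: if x is represented by tnum a and y by tnum b, then the
    # wrapped sum x + y must be represented by tnum_add(a, b).
    for av, am in all_tnums():
        for bv, bm in all_tnums():
            rv, rm = tnum_add(av, am, bv, bm)
            for x in concrete(av, am):
                for y in concrete(bv, bm):
                    if ((x + y) & MASK) & ~rm != rv:
                        return False
    return True

assert tnum_add_is_sound()
```

Breaking the transcription on purpose (for example, dropping the `~` in the final `sv & ~mu`) makes the assertion fail, which is a quick way to convince yourself the check has teeth. The same enumeration skeleton works for other tnum operations by swapping in their kernel definitions.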