On Mon, Oct 23, 2023 at 8:52 AM Paul Chaignon <paul@xxxxxxxxxxxxx> wrote:
>
> On Mon, Oct 23, 2023 at 10:05:41PM +0800, Shung-Hsi Yu wrote:
> > On Sat, Oct 21, 2023 at 09:42:46PM -0700, Andrii Nakryiko wrote:
> > > On Fri, Oct 20, 2023 at 10:37 AM Srinivas Narayana Ganapathy
> > > <sn624@xxxxxxxxxxxxxx> wrote:
> > > >
> > > > Hi all,
> > > >
> > > > Thanks, @Shung-Hsi, for bringing up this conversation about
> > > > integrating formal verification approaches into the BPF CI and testing.
> > > >
> > > > > On 19-Oct-2023, at 1:34 PM, Andrii Nakryiko <andrii.nakryiko@xxxxxxxxx> wrote:
> > > > > On Thu, Oct 19, 2023 at 12:52 AM Shung-Hsi Yu <shung-hsi.yu@xxxxxxxx> wrote:
> > > > >> On Thu, Oct 19, 2023 at 03:30:33PM +0800, Shung-Hsi Yu wrote:
> > [...]
> > > > >>> FWIW an alternative approach that speeds things up is to use model checkers
> > > > >>> like Z3 or CBMC. On my laptop, using Z3 to validate tnum_add() against *all*
> > > > >>> possible inputs takes less than 1.3 seconds[3] (based on code from paper [1],
> > > > >>> but I somehow lost the link to their GitHub repository).
> > > > >>
> > > > >> Found it. For reference, the code used in "Sound, Precise, and Fast Abstract
> > > > >> Interpretation with Tristate Numbers"[1] can be found at
> > > > >> https://github.com/bpfverif/tnums-cgo22/blob/main/verification/tnum.py
> > > > >>
> > > > >> Below is a truncated form of the above that only checks tnum_add(); it requires
> > > > >> a package called python3-z3 on most distros:
> > > > >
> > > > > Great! I'd be curious to see how range tracking logic can be encoded
> > > > > using this approach, please give it a go!
> > > >
> > > > We have some recent work that applies formal verification approaches
> > > > to the entirety of range tracking in the eBPF verifier.
> > > > We posted a note to the eBPF mailing list about it some time ago:
> > > >
> > > > [1] https://lore.kernel.org/bpf/SJ2PR14MB6501E906064EE19F5D1666BFF93BA@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/T/#u
> > >
> > > Oh, I totally missed this, as I just went on a long vacation a few
> > > days before that and declared email bankruptcy afterwards. I'll try to
> > > give it a read, though I see lots of math symbols there and make no
> > > promises ;)
> >
> > Felt the same when I started reading their previous work, but I can vouch
> > that their work is definitely worth the read. (Though I have to admit I
> > secretly chant "math is easier than code, math is easier than code" to
> > convince my mind not to go into flight mode when seeing math symbols ;D)
>
> Hari et al. did a great job at explaining the intuitions throughout the
> paper. So even if you skip the math, you should be able to follow.
>
> Having an understanding of abstract interpretation helps. The Mozilla
> wiki has a great one [1] and I wrote a shorter BPF example of it [2].
>
> 1 - https://wiki.mozilla.org/Abstract_Interpretation
> 2 - https://pchaigno.github.io/abstract-interpretation.html

thanks :)

> > > > Our paper, also posted on [1], appeared at Computer Aided Verification (CAV)’23.
> > > >
> > > > [2] https://people.cs.rutgers.edu/~sn624/papers/agni-cav23.pdf
> > > >
> > > > Together with @Paul Chaignon and @Harishankar Vishwanathan (CC'ed), we
> > > > are working to get our tooling into a form that is integrable into BPF
> > > > CI. We look forward to your feedback when we post patches.
> > >
> > > If this could be integrated in a way that we can regularly run this
> > > and validate the latest version of the verifier, that would be great.
> > > I have a second part of verifier changes coming up that extends range
> > > tracking logic further to support range vs range (as opposed to range
> > > vs const that we do currently) comparisons and is_branch_taken, so
> > > having independent and formal verification of these changes would be
> > > great!
>
> The current goal is to have this running somewhere regularly (maybe
> releases + manual triggers) in a semi-automated fashion. The two
> challenges today are the time it takes to run verification (days without
> parallelization) and whether the bit of conversion & glue code will be
> maintainable long term.
>
> I'm fairly optimistic on the first as we're already down to hours with
> basic parallelization. The second is harder to predict, but I guess your
> patches will be a good exercise :)
>
> I've already run the verification on v6.0 to v6.3; v6.4 is currently
> running. Hari et al. had verified v4.14 to v5.19 before. I'll give it a
> try on this patchset afterward.

Cool, that's great! The second part of this work will be generalizing
this logic in the kernel to support range vs range comparisons, so I'd
appreciate it if you could validate that one as well. I'm finalizing it,
but will wait for this patch set to land first before posting the second
part, to have proper CI testing runs (and limit the amount of code
review to be done).

BTW, I've since made some more changes to these "selftests" to be a bit
more parallelizable, so this range_vs_consts set of tests can now run in
about 5 minutes on an 8+ core QEMU instance.

In the second part we'll have range-vs-range, so we have about 106
million cases and it takes slightly more than 8 hours single-threaded.
But with parallelization, it's done in slightly more than one hour. So,
of course, still too slow to run as part of a normal test_progs run, but
definitely easy to run locally to validate kernel changes (and it
probably makes sense to enable on some nightly CI runs, when we have
them).
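(Editorial aside: the range-vs-const is_branch_taken decision mentioned above works roughly like the sketch below. This is illustrative Python, not kernel code; the kernel's C implementation lives in kernel/bpf/verifier.c and covers many more opcodes and signedness combinations.)

```python
# Illustrative sketch of range-vs-const branch pruning: for
# 'if r1 < K' (BPF_JLT) with r1 tracked as the unsigned range
# [umin, umax], the verifier can sometimes decide the branch
# statically and skip exploring one side.
def branch_taken_jlt(umin, umax, k):
    """Return True if 'r1 < k' is always taken, False if never
    taken, and None if both outcomes are possible."""
    if umax < k:
        return True    # every value in the range satisfies r1 < k
    if umin >= k:
        return False   # no value in the range satisfies r1 < k
    return None        # range straddles k: explore both branches
```

The upcoming range-vs-range variant answers the same question when the right-hand side is itself a tracked range rather than a constant, which is what blows up the input space to ~106 million test cases.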
Regardless, my point is that both methods of verification are
complementary, I think, and it's good to have both available and working
on the latest kernel versions.

>
> > +1 (from a quick skim) this work is already great as-is, and it'd be even
> > better once it gets in the CI. From the paper there's this
> >
> > We conducted our experiments on ... a machine with two 10-core Intel
> > Skylake CPUs running at 2.20 GHz with 192 GB of memory...
> >
> > I suppose the memory requirement comes from the vast amount of state space
> > that the Z3 SMT solver has to go through, and perhaps that poses a
> > challenge for CI integration?
> >
> > Just wondering if there is some low-hanging fruit that can make things
> > easier for the SMT solver.
>
> This is how much memory the system had, but it didn't use it all :)
> When running the solver on a single core, I saw around 1GB of memory
> usage. With my changes to run on several cores, it can grow to a few
> GBs depending on the number of cores.
>
> --
> Paul
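(Editorial postscript: for readers who want a feel for the abstract interpretation mentioned upthread without following the links, here is a minimal interval-domain sketch. The function names and the brute-force check are illustrative inventions, not from the linked posts or the kernel; the idea mirrors the verifier's umin/umax tracking.)

```python
# Abstract interpretation in the unsigned interval domain: each
# register is tracked as [lo, hi]; an abstract operator is sound if
# the concrete result of any pair of members lies in the abstract
# result.
def interval_add(a, b, width=8):
    """Abstract addition of [lo, hi] intervals over width-bit
    unsigned values; possible overflow returns top (all values)."""
    lo, hi = a[0] + b[0], a[1] + b[1]
    top = (0, (1 << width) - 1)
    return top if hi > top[1] else (lo, hi)

def sound(a, b, width=8):
    """Brute-force soundness check: every concrete sum of members
    must land inside the abstract result."""
    lo, hi = interval_add(a, b, width)
    mask = (1 << width) - 1
    return all(lo <= ((x + y) & mask) <= hi
               for x in range(a[0], a[1] + 1)
               for y in range(b[0], b[1] + 1))
```

Brute force works here only because the toy domain is 8-bit; for the 64-bit kernel domains this is exactly where SMT solvers like Z3, as discussed above, take over.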