On Thu, 5 Nov 2020 12:45:39 -0800, Andrii Nakryiko wrote:
> That's not true. If you need new functionality like BTF, CO-RE,
> function-by-function verification, etc., then yes, you have to update
> kernel, compiler, libbpf, sometimes pahole. But if you have a BPF
> application that doesn't use and need any of the newer features, it
> will keep working just fine with the old kernel, old libbpf, and old
> compiler.

I'm fine with this. In practice, though, it doesn't work that well: we've
found ourselves chasing problems caused by an llvm update (problems for
older bpf programs, not new ones), problems on non-x86_64 caused by kernel
updates, etc. It can be attributed to living on the edge, and hopefully it
will smooth itself out over time. But it's still what users are
experiencing, and it's probably what David is referring to.

Add to that the fact that something that is in fact a new feature is
perceived as a bug fix by some users. For example, a perfectly valid and
simple C program, using nothing shiny, just a basic loop, compiles just
fine but is rejected by the kernel. A newer kernel together with a newer
compiler, a newer libbpf, and a newer pahole will cause the same program
to be accepted. The user does not see that, to get there, a whole load of
new BTF functionality had to be added and all of the mentioned projects
enhanced with substantial code. All they see is that their simple hello
world test program did not work before and now it does.

I'm not saying I have a solution, nor am I saying you should do something
about it. Just trying to explain the perception.

 Jiri
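
P.S. To make the loop example concrete, below is a minimal sketch of the
kind of program I mean (the section name and loop bound are illustrative,
not from a real bug report). With an older toolchain the loop has to be
fully unrolled (e.g. with #pragma unroll) or the verifier on pre-5.3
kernels rejects the back-edge; kernels with bounded loop support load the
very same source as is.

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  SEC("tracepoint/syscalls/sys_enter_write")
  int simple_loop(void *ctx)
  {
          int sum = 0;
          int i;

          /* A plain bounded loop. Pre-5.3 verifiers reject the
           * back-edge ("back-edge from insn X to Y") unless the
           * compiler unrolls it; newer kernels verify it directly. */
          for (i = 0; i < 16; i++)
                  sum += i;

          return sum & 1;
  }

  char _license[] SEC("license") = "GPL";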