Changes to bpf.h tend to clog up our build systems. The netdev/bpf
build bot does incremental builds to save time (reusing the build
directory to only rebuild changed objects). This is the rough
breakdown of how many objects need to be rebuilt based on the file
touched:

  kernel.h     40633
  bpf.h        17881
  bpf-cgroup.h 17875
  skbuff.h     10696
  bpf-netns.h   7604
  netdevice.h   7452
  filter.h      5003
  sock.h        4959
  tcp.h         4048

As the stats show, touching bpf.h is _very_ expensive.

The bulk of the objects get rebuilt because MM includes the cgroup
headers. Luckily bpf-cgroup.h does not fundamentally depend on bpf.h,
so we can break that dependency and reduce the number of rebuilt
objects.

With the patches applied, touching bpf.h causes 5019 objects to be
rebuilt (17881 / 5019 = 3.56x improvement). That's pretty much down to
filter.h plus noise.

v2: Try to make the new headers wider in scope. Collapse bpf-link and
    bpf-cgroup-types into one header, which may serve as a "BPF kernel
    API" header in the future if needed. Rename bpf-cgroup-storage.h
    to bpf-inlines.h. Add a fix for the s390 build issue.
v3: https://lore.kernel.org/all/20211215061916.715513-1-kuba@xxxxxxxxxx/
    Merge bpf-includes.h into bpf.h.
    Remember to git format-patch after fixing build issues.
v4: Change course - break off cgroup instead of breaking off bpf.

Jakub Kicinski (3):
  add includes masked by cgroup -> bpf dependency
  add missing bpf-cgroup.h includes
  bpf: remove the cgroup -> bpf header dependency

 arch/s390/mm/hugetlbpage.c      |  1 +
 include/linux/bpf-cgroup-defs.h | 70 +++++++++++++++++++++++++++++++++
 include/linux/bpf-cgroup.h      | 57 +----------------------------
 include/linux/cgroup-defs.h     |  2 +-
 kernel/bpf/helpers.c            |  1 +
 kernel/bpf/syscall.c            |  1 +
 kernel/bpf/verifier.c           |  1 +
 kernel/cgroup/cgroup.c          |  1 +
 kernel/trace/trace_kprobe.c     |  1 +
 kernel/trace/trace_uprobe.c     |  1 +
 net/ipv4/udp.c                  |  1 +
 net/ipv6/udp.c                  |  1 +
 net/socket.c                    |  1 +
 security/device_cgroup.c        |  1 +
 14 files changed, 83 insertions(+), 57 deletions(-)
 create mode 100644 include/linux/bpf-cgroup-defs.h

--
2.31.1
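
For reference, a minimal sketch of the split pattern the series applies:
type definitions move into a lightweight bpf-cgroup-defs.h that
cgroup-defs.h can include, while the declarations and inlines that need
bpf.h stay in bpf-cgroup.h. The specific struct members and includes
below are illustrative, not the exact contents of the new header.

/* include/linux/bpf-cgroup-defs.h (sketch; member list is illustrative) */
#ifndef _BPF_CGROUP_DEFS_H
#define _BPF_CGROUP_DEFS_H

#ifdef CONFIG_CGROUP_BPF

#include <linux/list.h>
#include <linux/percpu-refcount.h>
#include <linux/workqueue.h>

/*
 * Only the type definitions that cgroup-defs.h needs to embed live
 * here, so the cgroup headers no longer pull in bpf.h transitively.
 * Prog attach/detach declarations and static inlines stay in
 * bpf-cgroup.h, which still includes bpf.h.
 */
struct cgroup_bpf {
	struct list_head	progs;		/* attached programs (illustrative) */
	struct percpu_ref	refcnt;		/* lifetime of the attach state */
	struct work_struct	release_work;	/* deferred release */
};

#else /* CONFIG_CGROUP_BPF */
struct cgroup_bpf {};
#endif /* CONFIG_CGROUP_BPF */

#endif /* _BPF_CGROUP_DEFS_H */

cgroup-defs.h then includes bpf-cgroup-defs.h instead of bpf-cgroup.h,
and the .c files that used to get bpf-cgroup.h transitively through the
cgroup headers (udp.c, socket.c, device_cgroup.c, ...) gain an explicit
include, which is what the first two patches in the series do.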