From: YiFei Zhu <zhuyifei@xxxxxxxxxx>

To access the storage in a CGROUP_STORAGE map, one uses the
bpf_get_local_storage helper, which is extremely fast due to its use of
per-CPU variables. However, its whole code is built on the assumption
that one map can only be used by one program at any time, and this
prohibits any sharing of data between multiple programs using these
maps, eliminating a lot of use cases, such as some per-cgroup
configuration storage, written to by a setsockopt program and read by a
cg_sock_addr program.

Why not use other map types? The great part of the CGROUP_STORAGE map
is that it is isolated by the different cgroups it is attached to. When
one program uses bpf_get_local_storage, even on the same map, it gets a
different storage when run as a result of attaching to different
cgroups. The kernel manages the storages, simplifying both the BPF
program and userspace. (A minimal sketch of how the helper is used
appears after the patch descriptions below.) In theory, one could
probably use other maps, like an array or a hash, to do the same thing,
but it would add major overhead and complexity: userspace would need to
know when a cgroup is being freed in order to free up the corresponding
space in the replacement map.

This patch set introduces a significant change to the semantics of the
CGROUP_STORAGE map type. Instead of each storage being tied to one
single attachment, it is shared across different attachments to the
same cgroup, and persists until either the map or the cgroup it is
attached to is freed. The attach_type field of the map key struct
bpf_cgroup_storage_key is now unused: if userspace reads it, it will
always be zero; if userspace sends a non-zero value, it will be
ignored.

How could this break existing users?
* Users who use detach & reattach / program replacement as a shortcut
  to zeroing the storage. Since we need sharing between programs, we
  cannot zero the storage. Users who expect this behavior should either
  attach a program with a new map, or explicitly zero the map with a
  syscall (see the second sketch below).
* Programs that expect isolation between different attach types. In
  reality, attaching the same program to different attach types,
  relying on expected_attach_type not being enforced, should rarely
  happen, if at all.
* Userspace that does memcmp on the storage key when fetching map keys.
  In reality, if users want to use a fixed key they would use
  {delete,lookup,update}_elem rather than get_next_key.
These cases depend on undocumented implementation details, so the
impact should be very minimal.

Patch 1 introduces a test on the old expected behavior of the map type.
Patch 2 introduces a test showing how two programs cannot share one
such map.
Patch 3 implements the change of semantics to the map.
Patch 4 amends the new test such that it yields the behavior we expect
from the change.
Patch 5 documents the map type.
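For context, here is a minimal sketch of a program using such a map via
bpf_get_local_storage. It is not taken from the series; the map name,
section, and counter value are illustrative:

  /* Minimal sketch, not from the series; names are illustrative. */
  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  struct {
          __uint(type, BPF_MAP_TYPE_CGROUP_STORAGE);
          __type(key, struct bpf_cgroup_storage_key);
          __type(value, __u64);
  } cg_storage SEC(".maps");

  SEC("cgroup_skb/egress")
  int count_egress(struct __sk_buff *skb)
  {
          /* Returns the storage for this map and the cgroup this
           * program invocation is attached to; it is never NULL.
           */
          __u64 *counter = bpf_get_local_storage(&cg_storage, 0);

          __sync_fetch_and_add(counter, 1);
          return 1;
  }

  char _license[] SEC("license") = "GPL";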
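And a sketch of explicitly zeroing one cgroup's storage from userspace
after this change, assuming libbpf and a __u64-sized value. The helper
name is hypothetical, and attach_type is left at zero since it is now
ignored:

  #include <bpf/bpf.h>
  #include <linux/bpf.h>

  /* Hypothetical helper; error handling elided. */
  int zero_cgroup_storage(int map_fd, __u64 cgroup_inode_id)
  {
          struct bpf_cgroup_storage_key key = {
                  .cgroup_inode_id = cgroup_inode_id,
                  .attach_type     = 0, /* unused after this series */
          };
          __u64 zero = 0;

          /* The storage must already exist, i.e. some program is
           * attached to the cgroup with this map.
           */
          return bpf_map_update_elem(map_fd, &key, &zero, BPF_EXIST);
  }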
Changes since RFC:
* Clarify commit message in patch 3 such that it says the lifetime of
  the storage is ended at the freeing of the cgroup_bpf, rather than
  the cgroup itself.
* Restored an -ENOMEM check in __cgroup_bpf_attach.
* Update selftests for recent change in network_helpers API.

Changes since v1:
* s/CHECK_FAIL/CHECK/
* s/bpf_prog_attach/bpf_program__attach_cgroup/
* Moved test__start_subtest to test_cg_storage_multi.
* Removed some redundant CHECK_FAIL where they are already CHECK-ed.

Changes since v2:
* Lock cgroup_mutex during map_free.
* Publish new storages only if attach is successful, by tracking
  exactly which storages are reused in an array of bools.
* Mention bpftool map dump showing a value of zero for attach_type in
  patch 3 commit message.

Changes since v3:
* Use a much simpler lookup and allocate-if-not-exist from the fact
  that cgroup_mutex is locked during attach.
* Removed an unnecessary spinlock hold.

YiFei Zhu (5):
  selftests/bpf: Add test for CGROUP_STORAGE map on multiple attaches
  selftests/bpf: Test CGROUP_STORAGE map can't be used by multiple progs
  bpf: Make cgroup storages shared across attaches on the same cgroup
  selftests/bpf: Test CGROUP_STORAGE behavior on shared egress + ingress
  Documentation/bpf: Document CGROUP_STORAGE map type

 Documentation/bpf/index.rst                   |   9 +
 Documentation/bpf/map_cgroup_storage.rst      |  95 +++++++
 include/linux/bpf-cgroup.h                    |  15 +-
 include/uapi/linux/bpf.h                      |   2 +-
 kernel/bpf/cgroup.c                           |  69 +++--
 kernel/bpf/core.c                             |  12 -
 kernel/bpf/local_storage.c                    |  73 ++---
 tools/include/uapi/linux/bpf.h                |   2 +-
 .../bpf/prog_tests/cg_storage_multi.c         | 265 ++++++++++++++++++
 .../selftests/bpf/progs/cg_storage_multi.h    |  13 +
 .../progs/cg_storage_multi_egress_ingress.c   |  45 +++
 .../bpf/progs/cg_storage_multi_egress_only.c  |  33 +++
 12 files changed, 536 insertions(+), 97 deletions(-)
 create mode 100644 Documentation/bpf/map_cgroup_storage.rst
 create mode 100644 tools/testing/selftests/bpf/prog_tests/cg_storage_multi.c
 create mode 100644 tools/testing/selftests/bpf/progs/cg_storage_multi.h
 create mode 100644 tools/testing/selftests/bpf/progs/cg_storage_multi_egress_ingress.c
 create mode 100644 tools/testing/selftests/bpf/progs/cg_storage_multi_egress_only.c

-- 
2.27.0