On 2/10/22 5:51 PM, Song Liu wrote:
On Feb 10, 2022, at 12:25 AM, Daniel Borkmann <daniel@xxxxxxxxxxxxx> wrote:
On 2/10/22 7:41 AM, Song Liu wrote:
bpf_prog_pack uses huge pages to reduce pressure on the instruction TLB.
To guarantee that huge pages are allocated for bpf_prog_pack, it is necessary
to allocate memory of size PMD_SIZE * num_online_nodes().
On the other hand, if the system doesn't support huge pages, it is more
efficient to allocate a bpf_prog_pack of PAGE_SIZE.
Address these different scenarios with a more flexible bpf_prog_pack_size().
Signed-off-by: Song Liu <song@xxxxxxxxxx>
---
kernel/bpf/core.c | 47 +++++++++++++++++++++++++++--------------------
1 file changed, 27 insertions(+), 20 deletions(-)
diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
index 42d96549a804..d961a1f07a13 100644
--- a/kernel/bpf/core.c
+++ b/kernel/bpf/core.c
@@ -814,46 +814,53 @@ int bpf_jit_add_poke_descriptor(struct bpf_prog *prog,
* allocator. The prog_pack allocator uses HPAGE_PMD_SIZE page (2MB on x86)
* to host BPF programs.
*/
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-#define BPF_PROG_PACK_SIZE HPAGE_PMD_SIZE
-#else
-#define BPF_PROG_PACK_SIZE PAGE_SIZE
-#endif
#define BPF_PROG_CHUNK_SHIFT 6
#define BPF_PROG_CHUNK_SIZE (1 << BPF_PROG_CHUNK_SHIFT)
#define BPF_PROG_CHUNK_MASK (~(BPF_PROG_CHUNK_SIZE - 1))
-#define BPF_PROG_CHUNK_COUNT (BPF_PROG_PACK_SIZE / BPF_PROG_CHUNK_SIZE)
struct bpf_prog_pack {
struct list_head list;
void *ptr;
- unsigned long bitmap[BITS_TO_LONGS(BPF_PROG_CHUNK_COUNT)];
+ unsigned long bitmap[];
};
-#define BPF_PROG_MAX_PACK_PROG_SIZE BPF_PROG_PACK_SIZE
#define BPF_PROG_SIZE_TO_NBITS(size) (round_up(size, BPF_PROG_CHUNK_SIZE) / BPF_PROG_CHUNK_SIZE)
static DEFINE_MUTEX(pack_mutex);
static LIST_HEAD(pack_list);
+static inline int bpf_prog_pack_size(void)
+{
+ /* If vmap_allow_huge == true, use pack size of the smallest
+ * possible vmalloc huge page: PMD_SIZE * num_online_nodes().
+ * Otherwise, use pack size of PAGE_SIZE.
+ */
+ return get_vmap_allow_huge() ? PMD_SIZE * num_online_nodes() : PAGE_SIZE;
+}
Imho, this is making too many assumptions about implementation details. Can't we
just add a new module_alloc*() API instead which internally guarantees allocating
huge pages when enabled/supported (e.g. with a __weak function as fallback)?
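Rough sketch of what I have in mind (untested, the module_alloc_huge() name
is just for illustration):

void * __weak module_alloc_huge(unsigned long size)
{
	/* Generic fallback for archs without a huge page backed module area. */
	return module_alloc(size);
}

Archs that can back the allocation with huge pages would then override this,
and callers wouldn't have to care about the sizing details themselves.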
I agree that this is making too many assumptions. But a new module_alloc_huge()
may not work, because we need the caller to know the proper size to ask for.
(Or maybe I misunderstood your suggestion?)
How about we introduce something like
/* minimal size to get huge pages from vmalloc. If not possible,
* return 0 (or -1?)
*/
int vmalloc_hpage_min_size(void)
{
return vmap_allow_huge ? PMD_SIZE * num_online_nodes() : 0;
}
And that would live inside mm/vmalloc.c and be exported to users ...
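I.e. roughly (just a sketch), declaring it as

/* include/linux/vmalloc.h */
int vmalloc_hpage_min_size(void);

... with the definition sitting in mm/vmalloc.c next to vmap_allow_huge, so
the flag itself stays private to vmalloc.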
/* minimal size to get huge pages from module_alloc */
int module_alloc_hpage_min_size(void)
{
return vmalloc_hpage_min_size();
}
... and this one as a wrapper in the module alloc infra with a __weak attr?
static inline int bpf_prog_pack_size(void)
{
return module_alloc_hpage_min_size() ? : PAGE_SIZE;
}
Could probably work. It's not nice, but at least the logic lives in the
corresponding places, so it's not exposed / hard coded inside bpf, and bpf
doesn't have to assume implementation details which could potentially break
later on.
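For the module side I'd expect roughly (again just a sketch, untested):

/* kernel/module.c, generic default that archs can override */
int __weak module_alloc_hpage_min_size(void)
{
	return vmalloc_hpage_min_size();
}

That way bpf_prog_pack_size() above only falls back to PAGE_SIZE when the
min size comes back as 0.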
Thanks,
Daniel