From: Alexei Starovoitov <ast@xxxxxxxxxx>

BPF would like to use the vmap API to implement a lazily-populated
memory space which can be shared by multiple userspace threads.

The vmap API is generally public and has functions to request and
release areas of kernel address space, as well as functions to map
various types of backing memory into that space. For example, there
is the public ioremap_page_range(), which is used to map device
memory into addressable kernel space.

The new BPF code needs the functionality of vmap_pages_range() in
order to incrementally map privately managed arrays of pages into its
vmap area. Indeed, this function used to be public, but became private
when use cases other than vmalloc happened to disappear.

Make it public again for the new external user.

The next commits will introduce bpf_arena, which is a sparsely
populated shared memory region between a bpf program and a user space
process. It will map privately-managed pages into an existing vm
area. It's the same pattern and layer of abstraction as
ioremap_page_range().

Signed-off-by: Alexei Starovoitov <ast@xxxxxxxxxx>
---
 include/linux/vmalloc.h | 2 ++
 mm/vmalloc.c            | 4 ++--
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index c720be70c8dd..bafb87c69e3d 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -233,6 +233,8 @@ static inline bool is_vm_area_hugepages(const void *addr)
 
 #ifdef CONFIG_MMU
 void vunmap_range(unsigned long addr, unsigned long end);
+int vmap_pages_range(unsigned long addr, unsigned long end,
+		     pgprot_t prot, struct page **pages, unsigned int page_shift);
 static inline void set_vm_flush_reset_perms(void *addr)
 {
 	struct vm_struct *vm = find_vm_area(addr);
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d12a17fc0c17..eae93d575d1b 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -625,8 +625,8 @@ int vmap_pages_range_noflush(unsigned long addr, unsigned long end,
  * RETURNS:
  * 0 on success, -errno on failure.
  */
-static int vmap_pages_range(unsigned long addr, unsigned long end,
-			    pgprot_t prot, struct page **pages, unsigned int page_shift)
+int vmap_pages_range(unsigned long addr, unsigned long end,
+		     pgprot_t prot, struct page **pages, unsigned int page_shift)
 {
 	int err;
-- 
2.34.1
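
A minimal usage sketch of the intended pattern (illustrative only; not
part of the patch and not the bpf_arena code itself): reserve kernel
address space once with the existing get_vm_area(), then map privately
managed pages into it on demand with the now-public vmap_pages_range().
The lazy_region_*() helper names and the single static area are
hypothetical.

#include <linux/vmalloc.h>
#include <linux/mm.h>

static struct vm_struct *area;

/* Reserve address space up front; no pages are mapped yet. */
static int lazy_region_create(unsigned long size)
{
	area = get_vm_area(size, VM_MAP);

	return area ? 0 : -ENOMEM;
}

/* Map one privately managed page at page index @pgoff on first use. */
static int lazy_region_populate(struct page *page, unsigned long pgoff)
{
	unsigned long start = (unsigned long)area->addr +
			      (pgoff << PAGE_SHIFT);

	/* page_shift == PAGE_SHIFT: map a single order-0 page. */
	return vmap_pages_range(start, start + PAGE_SIZE, PAGE_KERNEL,
				&page, PAGE_SHIFT);
}

This is the same layering as ioremap_page_range(): the vm area is
requested separately and mappings are established into it
incrementally, which vmap() alone cannot do, since it allocates a
fresh area and maps the whole page array in one call.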