From: Alexei Starovoitov <ast@xxxxxxxxxx>

The vmap() API is used to map a set of pages into contiguous kernel
virtual space. BPF would like to extend the vmap API to implement a
lazily-populated contiguous kernel virtual space whose size and start
address are fixed early.

The vmap API has functions to request and release areas of kernel
address space: get_vm_area() and free_vm_area().

Introduce vm_area_map_pages(area, start_addr, count, pages) to map a
set of pages within a given area. It has the same sanity checks as
vmap() does. In addition it also checks that the area was created by
get_vm_area() with the VM_MAP flag (as all users of vmap() should be
doing).

Also add vm_area_unmap_pages(), a safer alternative to the existing
vunmap_range() API.

The next commits will introduce bpf_arena, a sparsely populated shared
memory region between a bpf program and a user space process. It will
map privately-managed pages into an existing vm area with the
following steps:

  area = get_vm_area(area_size, VM_MAP | VM_USERMAP); // at bpf prog verification time
  vm_area_map_pages(area, kaddr, 1, page);            // on demand
  vm_area_unmap_pages(area, kaddr, 1);
  free_vm_area(area);                                 // after bpf prog is unloaded

For the BPF use case the area_size will be 4Gbyte plus 64Kbyte of
guard pages, and area->addr will be known and fixed at program
verification time.
Signed-off-by: Alexei Starovoitov <ast@xxxxxxxxxx>
---
 include/linux/vmalloc.h |  3 +++
 mm/vmalloc.c            | 46 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 49 insertions(+)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index c720be70c8dd..7d112cc5f2a3 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -232,6 +232,9 @@ static inline bool is_vm_area_hugepages(const void *addr)
 }
 
 #ifdef CONFIG_MMU
+int vm_area_map_pages(struct vm_struct *area, unsigned long addr, unsigned int count,
+		      struct page **pages);
+int vm_area_unmap_pages(struct vm_struct *area, unsigned long addr, unsigned int count);
 void vunmap_range(unsigned long addr, unsigned long end);
 static inline void set_vm_flush_reset_perms(void *addr)
 {
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d12a17fc0c17..d6337d46f1d8 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -635,6 +635,52 @@ static int vmap_pages_range(unsigned long addr, unsigned long end,
 	return err;
 }
 
+/**
+ * vm_area_map_pages - map pages inside given vm_area
+ * @area: vm_area
+ * @addr: start address inside vm_area
+ * @count: number of pages
+ * @pages: pages to map (always PAGE_SIZE pages)
+ */
+int vm_area_map_pages(struct vm_struct *area, unsigned long addr, unsigned int count,
+		      struct page **pages)
+{
+	unsigned long size = ((unsigned long)count) * PAGE_SIZE;
+	unsigned long end = addr + size;
+
+	might_sleep();
+	if (WARN_ON_ONCE(area->flags & VM_FLUSH_RESET_PERMS))
+		return -EINVAL;
+	if (WARN_ON_ONCE(area->flags & VM_NO_GUARD))
+		return -EINVAL;
+	if (WARN_ON_ONCE(!(area->flags & VM_MAP)))
+		return -EINVAL;
+	if (count > totalram_pages())
+		return -E2BIG;
+	if (addr < (unsigned long)area->addr || (void *)end > area->addr + area->size)
+		return -ERANGE;
+
+	return vmap_pages_range(addr, end, PAGE_KERNEL, pages, PAGE_SHIFT);
+}
+
+/**
+ * vm_area_unmap_pages - unmap pages inside given vm_area
+ * @area: vm_area
+ * @addr: start address inside vm_area
+ * @count: number of pages to unmap
+ */
+int vm_area_unmap_pages(struct vm_struct *area, unsigned long addr, unsigned int count)
+{
+	unsigned long size = ((unsigned long)count) * PAGE_SIZE;
+	unsigned long end = addr + size;
+
+	if (addr < (unsigned long)area->addr || (void *)end > area->addr + area->size)
+		return -ERANGE;
+
+	vunmap_range(addr, end);
+	return 0;
+}
+
 int is_vmalloc_or_module_addr(const void *x)
 {
 	/*
-- 
2.34.1