On Thu, Feb 15, 2024 at 12:50 PM Alexei Starovoitov <alexei.starovoitov@xxxxxxxxx> wrote:
>
> >
> > So propose an API that does that instead of exposing random low-level
> > details.
>
> The generic_ioremap_prot() and vmap() APIs make sense for the cases
> when phys memory exists with a known size. It needs to be vmap-ed and
> not touched afterwards.
>
> The bpf_arena use case is similar to kasan, which
> reserves a giant virtual memory region, and then
> does apply_to_page_range() to populate certain PTEs with pages in that region,
> and later apply_to_existing_page_range() to free pages in kasan's region.
>
> bpf_arena is very similar, except it currently calls get_vm_area()
> to get a 4Gb+guard_pages region, then vmap_pages_range() to
> populate a page in it, and vunmap_range() to remove a page.
>
> These existing APIs work, so I'm not sure what you're requesting.
> I can guess many different things, but please clarify to reduce
> this back and forth.
> Are you worried about range checking? That vmap_pages_range()
> can accidentally hit an unintended range?

Guessing... like this?

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d12a17fc0c17..3bc67b526272 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -635,6 +635,18 @@ static int vmap_pages_range(unsigned long addr, unsigned long end,
 	return err;
 }
 
+int vm_area_map_pages(struct vm_struct *area, unsigned long addr, unsigned int count,
+		      struct page **pages)
+{
+	unsigned long size = ((unsigned long)count) * PAGE_SIZE;
+	unsigned long end = addr + size;
+
+	if (addr < (unsigned long)area->addr || (void *)end > area->addr + area->size)
+		return -EINVAL;
+	return vmap_pages_range(addr, end, PAGE_KERNEL, pages, PAGE_SHIFT);
+}

In addition, we could conditionally silence the WARN_ON()s in
vmap_pages_pte_range(), but IMO that's overkill.

What did you have in mind?