On Thu, Feb 08, 2024 at 06:44:35AM +0100, Johannes Weiner wrote:
> On Wed, Feb 07, 2024 at 09:07:51PM +0000, Lorenzo Stoakes wrote:
> > On Tue, Feb 06, 2024 at 02:04:28PM -0800, Alexei Starovoitov wrote:
> > > From: Alexei Starovoitov <ast@xxxxxxxxxx>
> > >
> > > The next commit will introduce bpf_arena, which is a sparsely
> > > populated shared memory region between a bpf program and a user
> > > space process. It will function similarly to vmalloc()/vm_map_ram():
> > > - get_vm_area()
> > > - alloc_pages()
> > > - vmap_pages_range()
> >
> > This tells me absolutely nothing about why it is justified to expose
> > this internal interface. You need to put more explanation here along
> > the lines of 'we had no other means of achieving what we needed from
> > vmalloc because X, Y, Z and are absolutely convinced it poses no risk
> > of breaking anything'.
>
> How about this:
>
> ---
>
> BPF would like to use the vmap API to implement a lazily-populated
> memory space which can be shared by multiple userspace threads.
>
> The vmap API is generally public and has functions to request and
> release areas of kernel address space, as well as functions to map
> various types of backing memory into that space.
>
> For example, there is the public ioremap_page_range(), which is used
> to map device memory into addressable kernel space.
>
> The new BPF code needs the functionality of vmap_pages_range() in
> order to incrementally map privately managed arrays of pages into its
> vmap area. Indeed this function used to be public, but became private
> when usecases other than vmalloc happened to disappear.
>
> Make it public again for the new external user.

Thanks, yes, this is much better!

> ---
>
> > I mean, I see a lot of checks in vmap() that aren't in
> > vmap_pages_range(), for instance. Are we good to expose that, not only
> > for you but for any other core kernel users?
>
> Those are applicable only to the higher-level vmap/vmalloc usecases:
> controlling the implied call to get_vm_area; managing the area with
> vfree(). They're not relevant for mapping privately-managed pages into
> an existing vm area. It's the same pattern and layer of abstraction as
> ioremap_page_range(), which doesn't have any of those checks either.

OK, that makes more sense re: the comparison to ioremap_page_range().

My concern arises from a couple of things. Firstly, I want to avoid
exposing an interface that might be misinterpreted as acting like a
standard vmap() when it instead skips a lot of checks (e.g. count >
totalram_pages()).

Secondly, my concern is that this side-steps the metadata tracking the
use of the vmap range, doesn't it? So there is nothing stopping something
coming along and remapping some other vmalloc memory into that range
later, right? It feels like exposing page table code that sits outside of
the whole vmalloc mechanism for other users.

On the other hand... since we already expose ioremap_page_range() and
that has the exact same issue, I guess it's moot anyway?
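
For reference, the incremental get_vm_area()/alloc_pages()/vmap_pages_range()
pattern the commit message describes could be sketched roughly as below. This
is a hypothetical illustration, not code from the actual bpf_arena patch, and
it assumes the exported vmap_pages_range() keeps its current internal
signature (addr, end, prot, pages, page_shift):

```c
#include <linux/vmalloc.h>
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/sizes.h>

/* Hypothetical example of the lazily-populated pattern. */
static int arena_map_one_page(void)
{
	struct vm_struct *area;
	struct page *page;
	unsigned long addr;
	int err;

	/* 1) Reserve kernel address space up front, initially unpopulated. */
	area = get_vm_area(SZ_4M, VM_MAP);
	if (!area)
		return -ENOMEM;

	/* 2) Allocate a privately-managed page on demand... */
	page = alloc_page(GFP_KERNEL | __GFP_ZERO);
	if (!page) {
		free_vm_area(area);
		return -ENOMEM;
	}

	/* 3) ...and map just that one page into the reserved range,
	 * leaving the rest of the area sparse.
	 */
	addr = (unsigned long)area->addr;
	err = vmap_pages_range(addr, addr + PAGE_SIZE, PAGE_KERNEL,
			       &page, PAGE_SHIFT);
	if (err) {
		__free_page(page);
		free_vm_area(area);
	}
	return err;
}
```

Note that, exactly as discussed above, nothing in this path performs the
vmap()-level sanity checks or the vfree()-style area management; the caller
owns both the pages and the area lifetime.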