On Tue, Aug 16, 2022 at 07:08:22PM +0100, Matthew Wilcox wrote:
> For these reasons, I'm proposing the logical equivalent to this:
> 
> +void *folio_map_local(struct folio *folio)
> +{
> +	if (!IS_ENABLED(CONFIG_HIGHMEM))
> +		return folio_address(folio);
> +	if (!folio_test_large(folio))
> +		return kmap_local_page(&folio->page);
> +	return vmap_folio(folio);
> +}
> 
> (where vmap_folio() is a new function that works a lot like vmap(),
> chunks of this get moved out-of-line, etc, etc., but this concept)

This vmap_folio() compiles but is otherwise untested. Anything I
obviously got wrong here?

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index dd6cdb201195..1867759c33ff 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2848,6 +2848,42 @@ void *vmap(struct page **pages, unsigned int count,
 }
 EXPORT_SYMBOL(vmap);
 
+#ifdef CONFIG_HIGHMEM
+/**
+ * vmap_folio - Map an entire folio into virtually contiguous space
+ * @folio: The folio to map.
+ *
+ * Maps all pages in @folio into contiguous kernel virtual space. This
+ * function is only available in HIGHMEM builds; for !HIGHMEM, use
+ * folio_address(). The pages are mapped with PAGE_KERNEL permissions.
+ *
+ * Return: The address of the area or %NULL on failure
+ */
+void *vmap_folio(struct folio *folio)
+{
+	size_t size = folio_size(folio);
+	struct vm_struct *area;
+	unsigned long addr;
+
+	might_sleep();
+
+	area = get_vm_area_caller(size, VM_MAP, __builtin_return_address(0));
+	if (!area)
+		return NULL;
+
+	addr = (unsigned long)area->addr;
+	if (vmap_range_noflush(addr, addr + size,
+			folio_pfn(folio) << PAGE_SHIFT,
+			PAGE_KERNEL, folio_shift(folio))) {
+		vunmap(area->addr);
+		return NULL;
+	}
+	flush_cache_vmap(addr, addr + size);
+
+	return area->addr;
+}
+#endif
+
 #ifdef CONFIG_VMAP_PFN
 struct vmap_pfn_data {
 	unsigned long *pfns;
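
For anyone reading along: the folio_map_local() sketch quoted above implies
a matching teardown helper. The following is purely illustrative and not
part of the patch; it just mirrors the three cases using the existing
kunmap_local() and vunmap() primitives:

/*
 * Illustrative only, not part of the patch: the unmap side implied by
 * folio_map_local() would roughly mirror its three cases.
 */
void folio_unmap_local(struct folio *folio, void *addr)
{
	if (!IS_ENABLED(CONFIG_HIGHMEM))
		return;			/* folio_address() needs no teardown */
	if (!folio_test_large(folio))
		kunmap_local(addr);	/* undo kmap_local_page() */
	else
		vunmap(addr);		/* undo vmap_folio() */
}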
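
And, as a hedged example, here is how a HIGHMEM caller might drive the new
function directly, assuming vunmap() is the teardown path (the error path
in the patch already uses it):

/* Hypothetical caller on a HIGHMEM config, for illustration only. */
void *addr = vmap_folio(folio);

if (!addr)
	return -ENOMEM;		/* no vmalloc address space available */
memset(addr, 0, folio_size(folio));
vunmap(addr);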