On Thu, Nov 28, 2024 at 08:47:28PM +0000, Lorenzo Stoakes wrote:
> Peter - not sure whether it's easy for you to make a simple adjustment to this
> patch or if you want me to just send a v2, but I have to pop an #ifdef CONFIG_MMU
> into the code.
>
> > +static int map_range(struct perf_buffer *rb, struct vm_area_struct *vma)
> > +{
> > +	unsigned long nr_pages = vma_pages(vma);
> > +	int err = 0;
> > +	unsigned long pgoff;
> > +
> > +	for (pgoff = 0; pgoff < nr_pages; pgoff++) {
> > +		unsigned long va = vma->vm_start + PAGE_SIZE * pgoff;
> > +		struct page *page = perf_mmap_to_page(rb, pgoff);
> > +
> > +		if (page == NULL) {
> > +			err = -EINVAL;
> > +			break;
> > +		}
> > +
> > +		/* Map readonly, perf_mmap_pfn_mkwrite() called on write fault. */
> > +		err = remap_pfn_range(vma, va, page_to_pfn(page), PAGE_SIZE,
> > +				      vm_get_page_prot(vma->vm_flags & ~VM_SHARED));
> > +		if (err)
> > +			break;
> > +	}
> > +
>
> Need a:
>
> #ifdef CONFIG_MMU
> > +	/* Clear any partial mappings on error. */
> > +	if (err)
> > +		zap_page_range_single(vma, vma->vm_start, nr_pages * PAGE_SIZE, NULL);
> #endif
>
> Here to work around the wonders of nommu :)

All good, I'll edit the thing.
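
For reference, a minimal sketch of map_range() with the guard folded in. The
quoted hunk ends at the loop, so the trailing return is an assumption from
context; the #ifdef is needed because zap_page_range_single() only exists on
CONFIG_MMU configurations:

	static int map_range(struct perf_buffer *rb, struct vm_area_struct *vma)
	{
		unsigned long nr_pages = vma_pages(vma);
		int err = 0;
		unsigned long pgoff;

		for (pgoff = 0; pgoff < nr_pages; pgoff++) {
			unsigned long va = vma->vm_start + PAGE_SIZE * pgoff;
			struct page *page = perf_mmap_to_page(rb, pgoff);

			if (page == NULL) {
				err = -EINVAL;
				break;
			}

			/* Map readonly, perf_mmap_pfn_mkwrite() called on write fault. */
			err = remap_pfn_range(vma, va, page_to_pfn(page), PAGE_SIZE,
					      vm_get_page_prot(vma->vm_flags & ~VM_SHARED));
			if (err)
				break;
		}

	#ifdef CONFIG_MMU
		/* Clear any partial mappings on error; not available on nommu. */
		if (err)
			zap_page_range_single(vma, vma->vm_start,
					      nr_pages * PAGE_SIZE, NULL);
	#endif

		/* Assumed: the quoted hunk does not show the function's return. */
		return err;
	}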