On Fri, Oct 30 2020 at 13:06, Matthew Wilcox wrote:
> On Thu, Oct 29, 2020 at 11:18:06PM +0100, Thomas Gleixner wrote:
>> This series provides kmap_local.* iomap_local variants which only
>> disable migration to keep the virtual mapping address stable across
>> preemption, but disable neither pagefaults nor preemption. The new
>> functions can be used in any context, but if used in atomic context
>> the caller has to take care of eventually disabling pagefaults.
>
> Could I ask for a CONFIG_KMAP_DEBUG which aliases all the kmap
> variants to vmap()? I think we currently have a problem in iov_iter
> on HIGHMEM configs:

For kmap() that would work, but for kmap_atomic() not so much when it
is called from non-preemptible context, because vmap() might sleep.

> copy_page_to_iter() calls page_copy_sane() which checks:
>
>         head = compound_head(page);
>         if (likely(n <= v && v <= page_size(head)))
>                 return true;
>
> but then:
>
>         void *kaddr = kmap_atomic(page);
>         size_t wanted = copy_to_iter(kaddr + offset, bytes, i);
>         kunmap_atomic(kaddr);
>
> so if offset + bytes is larger than PAGE_SIZE, this is going to work
> for lowmem pages and fail miserably for highmem pages. I suggest
> vmap() because vmap has a PAGE_SIZE gap between each allocation.

On 32bit highmem the kmap_atomic() case is easy: Double the number of
mapping slots and only use every second one, which gives you a guard
page between the maps.

For 64bit we could do something ugly: Enable the highmem kmap_atomic()
crud and enforce an alias mapping (at least on the architectures where
this is reasonable). Then you get the same as for 32bit.

> Alternatively if we could have a kmap_atomic_compound(), that would
> be awesome, but probably not realistic to implement. I've more or
> less resigned myself to having to map things one page at a time.

That might be horribly awesome on 32bit :)

Thanks,

        tglx
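As a rough illustration of the "double the slots" idea for 32bit, here
is a completely untested sketch against the existing x86 kmap_atomic()
path in arch/x86/mm/highmem_32.c. The symbols (kmap_atomic_idx_push(),
KM_TYPE_NR, FIX_KMAP_BEGIN, kmap_pte) are the ones already there; the
index doubling and the enlarged fixmap reservation are only the idea
sketched above, not an actual patch:

/*
 * Untested sketch against arch/x86/mm/highmem_32.c: use only every
 * second kmap_atomic() fixmap slot so that an unmapped slot sits next
 * to each mapping and acts as a guard page. On 32bit x86 the fixmap
 * virtual addresses grow downwards with increasing index, so using
 * the odd slots leaves the even slot directly above each mapping (in
 * virtual address) unmapped. Assumes FIX_KMAP_END is enlarged to
 * cover 2 * KM_TYPE_NR * NR_CPUS slots and that kunmap_atomic_high()
 * uses the same index calculation.
 */
void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
{
	unsigned long vaddr;
	int idx, type;

	type = kmap_atomic_idx_push();
	/* Odd slots only; the even slots stay unmapped as guard pages */
	idx = 2 * (type + KM_TYPE_NR * smp_processor_id()) + 1;
	vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
	BUG_ON(!pte_none(*(kmap_pte - idx)));
	set_pte(kmap_pte - idx, mk_pte(page, prot));
	arch_flush_lazy_mmu_mode();

	return (void *)vaddr;
}

With that, a copy which runs past the end of the mapped highmem page
faults on the unmapped guard slot instead of silently reading whatever
the neighbouring kmap slot happens to hold, which is exactly what would
catch the copy_page_to_iter() case quoted above.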