On Tue, Jun 15, 2021 at 03:24:39PM +0200, Christoph Hellwig wrote:
> Add a helper that calls flush_kernel_dcache_page before unmapping the
> local mapping.  flush_kernel_dcache_page is required for all pages
> potentially mapped into userspace that were written to using kmap*,
> so having a helper that does the right thing can be very convenient.
>
> Signed-off-by: Christoph Hellwig <hch@xxxxxx>
> ---
>  include/linux/highmem-internal.h | 7 +++++++
>  include/linux/highmem.h          | 4 ++++
>  2 files changed, 11 insertions(+)
>
> diff --git a/include/linux/highmem-internal.h b/include/linux/highmem-internal.h
> index 7902c7d8b55f..bd37706db147 100644
> --- a/include/linux/highmem-internal.h
> +++ b/include/linux/highmem-internal.h
> @@ -224,4 +224,11 @@ do {                                                \
>  	__kunmap_local(__addr);				\
>  } while (0)
>
> +#define kunmap_local_dirty(__page, __addr)		\

I think having to store the page and addr to return to
kunmap_local_dirty() is going to be a pain in some code paths.  Not a
show stopper, but see below...

> +do {							\
> +	if (!PageSlab(__page))				\

Was there some clarification why the page can't be a Slab page?  Or is
this just an optimization?

> +		flush_kernel_dcache_page(__page);	\

Is this required on 32bit systems?  Why is kunmap_flush_on_unmap() not
sufficient on 64bit systems?  The normal kunmap_local() path does that.

I'm sorry, but I did not see a conclusion to my query on V1.  Herbert
implied that he just copied this from the crypto code.[1]

I'm concerned that this _dirty() call is just going to confuse the
users of kmap even more.  So why can't we get to the bottom of why
flush_kernel_dcache_page() needs so much logic around it before
complicating the general kernel users?

I would like to see it go away if possible.

Ira

[1] https://lore.kernel.org/lkml/20210615050258.GA5208@xxxxxxxxxxxxxxxxxxx/

> +	kunmap_local(__addr);				\
> +} while (0)
> +
>  #endif
> diff --git a/include/linux/highmem.h b/include/linux/highmem.h
> index 832b49b50c7b..65f548db4f2d 100644
> --- a/include/linux/highmem.h
> +++ b/include/linux/highmem.h
> @@ -93,6 +93,10 @@ static inline void kmap_flush_unused(void);
>   * On HIGHMEM enabled systems mapping a highmem page has the side effect of
>   * disabling migration in order to keep the virtual address stable across
>   * preemption. No caller of kmap_local_page() can rely on this side effect.
> + *
> + * If data is written to the returned kernel mapping, the caller needs to
> + * unmap the mapping using kunmap_local_dirty(), else kunmap_local() should
> + * be used.
>   */
>  static inline void *kmap_local_page(struct page *page);
>
> --
> 2.30.2
>
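
For illustration only, a minimal caller sketch (not from the patch) of
the pattern the first comment above is worried about: both the struct
page and the mapping address have to stay live across the write so they
can be handed back at unmap time.  The function name and the memcpy
payload are made up; only kmap_local_page() and the proposed
kunmap_local_dirty() come from the quoted patch.

#include <linux/highmem.h>
#include <linux/string.h>

/* Hypothetical caller, only to show the shape of the proposed API. */
static void fill_shared_page(struct page *page, const void *src, size_t len)
{
	void *addr = kmap_local_page(page);	/* map the page locally */

	memcpy(addr, src, len);			/* mapping is now dirty */

	/*
	 * Unlike plain kunmap_local(addr), the proposed helper needs
	 * both the page (for flush_kernel_dcache_page()) and the addr
	 * (for the underlying kunmap_local()).
	 */
	kunmap_local_dirty(page, addr);
}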