On Wed, 2008-08-27 at 23:36 +1000, Nick Piggin wrote:
> On Wednesday 27 August 2008 05:43, Eric Anholt wrote:
> > The driver would like to map IO space directly for copying data in when
> > appropriate, to avoid CPU cache flushing for streaming writes.
> > kmap_atomic_pfn lets us avoid IPIs associated with ioremap for this
> > process.
> >
> > Signed-off-by: Eric Anholt <eric@xxxxxxxxxx>
> > ---
> >  arch/x86/mm/highmem_32.c |    1 +
> >  1 files changed, 1 insertions(+), 0 deletions(-)
> >
> > diff --git a/arch/x86/mm/highmem_32.c b/arch/x86/mm/highmem_32.c
> > index 165c871..d52e91d 100644
> > --- a/arch/x86/mm/highmem_32.c
> > +++ b/arch/x86/mm/highmem_32.c
> > @@ -137,6 +137,7 @@ void *kmap_atomic_pfn(unsigned long pfn, enum km_type type)
> >
> >  	return (void*) vaddr;
> >  }
> > +EXPORT_SYMBOL(kmap_atomic_pfn);
> >
> >  struct page *kmap_atomic_to_page(void *ptr)
> >  {
> >
> I wonder if you ever tested my vmap rework patches with this issue? It
> seems somewhat x86 specific and also not conceptually so clean to use
> kmap_atomic_pfn for this. vmap may not be used by all architectures but
> I think it might be able to cover some of them.
>
> As I said, there are some other possible improvements that can be made
> to my vmap rewrite if performance isn't good enough, but I simply have
> not seen numbers...

The consumer of this is a driver for Intel platforms, so being x86-specific
is not a worry for this patch series. However, when other DRM drivers get
around to doing memory management, I'm sure they'll also be interested in
an ioremap_wc that doesn't eat IPI costs.

For us, the IPIs for flushing were eating over 10% of CPU time. If your
patch series cuts that cost, we could drop this piece at that point.

-- 
Eric Anholt
eric@xxxxxxxxxx                         eric.anholt@xxxxxxxxx
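
[Editor's note: the hunk above only exports kmap_atomic_pfn(); the copy path
itself lives in the driver. The following is a minimal sketch of the kind of
use being discussed, not code from Eric's patch series. The function name
example_aperture_pwrite and the aperture_base_pfn parameter are hypothetical;
kmap_atomic_pfn()'s signature matches the one shown in the diff, and the other
calls are standard kernel helpers of that era.]

/*
 * Illustrative sketch only -- not from the patch series.  Shows how a
 * driver might copy user data straight into a page of its IO aperture
 * using kmap_atomic_pfn() instead of keeping an ioremap() mapping around.
 * "aperture_base_pfn" is a hypothetical parameter naming the first PFN
 * of the device aperture.
 */
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/highmem.h>
#include <linux/uaccess.h>

static int example_aperture_pwrite(unsigned long aperture_base_pfn,
				   loff_t offset,
				   const char __user *user_data,
				   size_t len)
{
	while (len) {
		unsigned long pfn = aperture_base_pfn + (offset >> PAGE_SHIFT);
		unsigned int page_off = offset_in_page(offset);
		unsigned int bytes = min_t(size_t, len, PAGE_SIZE - page_off);
		void *vaddr;
		unsigned long unwritten;

		/* Per-CPU fixmap mapping of one aperture page. */
		vaddr = kmap_atomic_pfn(pfn, KM_USER0);

		/*
		 * kmap_atomic_pfn() disables pagefaults, so use the
		 * inatomic copy variant; the caller must have prefaulted
		 * or pinned the user pages beforehand.
		 */
		unwritten = __copy_from_user_inatomic(vaddr + page_off,
						      user_data, bytes);

		kunmap_atomic(vaddr, KM_USER0);

		if (unwritten)
			return -EFAULT;

		offset += bytes;
		user_data += bytes;
		len -= bytes;
	}

	return 0;
}

The relevant part is the teardown: kunmap_atomic() only clears a per-CPU
fixmap slot with a local TLB flush, whereas tearing down an ioremap()/vmap()
mapping flushes the kernel TLB range on every CPU, which is where the
cross-CPU IPIs measured above come from.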