On Tue, Mar 01, 2016 at 02:37:58AM +0000, Paul Burton wrote:
> The following patch will expose __update_cache to highmem pages. Handle
> them by mapping them in for the duration of the cache maintenance, just
> like in __flush_dcache_page. The code for that isn't shared because we
> need the page address in __update_cache so sharing became messy. Given
> that the entirety is an extra 5 lines, just duplicate it.
> 
> Signed-off-by: Paul Burton <paul.burton@xxxxxxxxxx>
> Cc: Lars Persson <lars.persson@xxxxxxxx>
> Cc: stable <stable@xxxxxxxxxxxxxxx> # v4.1+
> ---
> 
>  arch/mips/mm/cache.c | 10 +++++++++-
>  1 file changed, 9 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/mips/mm/cache.c b/arch/mips/mm/cache.c
> index 5a67d8c..8befa55 100644
> --- a/arch/mips/mm/cache.c
> +++ b/arch/mips/mm/cache.c
> @@ -149,9 +149,17 @@ void __update_cache(struct vm_area_struct *vma, unsigned long address,
>  		return;
>  	page = pfn_to_page(pfn);
>  	if (page_mapping(page) && Page_dcache_dirty(page)) {
> -		addr = (unsigned long) page_address(page);
> +		if (PageHighMem(page))
> +			addr = (unsigned long)kmap_atomic(page);
> +		else
> +			addr = (unsigned long)page_address(page);
> +
>  		if (exec || pages_do_alias(addr, address & PAGE_MASK))
>  			flush_data_cache_page(addr);
> +
> +		if (PageHighMem(page))
> +			__kunmap_atomic((void *)addr);
> +
>  		ClearPageDcacheDirty(page);
>  	}
>  }

Yet again this is betting the house on getting the right virtual address
from kmap_atomic.

  Ralf