On Mon, Oct 09, 2017 at 04:20:24PM +0100, Marc Zyngier wrote:
> We currently tightly couple dcache clean with icache invalidation,
> but KVM could do without the initial flush to PoU, as we've
> already flushed things to PoC.
> 
> Let's introduce invalidate_icache_range which is limited to
> invalidating the icache from the linear mapping (and thus
> has none of the userspace fault handling complexity), and
> wire it in KVM instead of flush_icache_range.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@xxxxxxx>
> ---
>  arch/arm64/include/asm/cacheflush.h |  8 ++++++++
>  arch/arm64/include/asm/kvm_mmu.h    |  4 ++--
>  arch/arm64/mm/cache.S               | 24 ++++++++++++++++++++++++
>  3 files changed, 34 insertions(+), 2 deletions(-)

[...]

> diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
> index 7f1dbe962cf5..0c330666a8c9 100644
> --- a/arch/arm64/mm/cache.S
> +++ b/arch/arm64/mm/cache.S
> @@ -80,6 +80,30 @@ USER(9f, ic	ivau, x4 )		// invalidate I line PoU
>  ENDPROC(flush_icache_range)
>  ENDPROC(__flush_cache_user_range)
>  
> +/*
> + *	invalidate_icache_range(start,end)
> + *
> + *	Ensure that the I cache is invalid within specified region. This
> + *	assumes that this is done on the linear mapping. Do not use it
> + *	on a userspace range, as this may fault horribly.
> + *
> + *	- start   - virtual start address of region
> + *	- end     - virtual end address of region
> + */
> +ENTRY(invalidate_icache_range)
> +	icache_line_size x2, x3
> +	sub	x3, x2, #1
> +	bic	x4, x0, x3
> +1:
> +	ic	ivau, x4			// invalidate I line PoU
> +	add	x4, x4, x2
> +	cmp	x4, x1
> +	b.lo	1b
> +	dsb	ish
> +	isb
> +	ret
> +ENDPROC(invalidate_icache_range)

Is there a good reason not to make this work for user addresses? If
it's as simple as adding a USER annotation and a fallback, then we
should wrap that in a macro and reuse it for __flush_cache_user_range.

Will
_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
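
For illustration, a rough sketch of the kind of shared macro Will is
suggesting. The name invalidate_icache_by_line and its argument list are
made up for this sketch rather than taken from the patch; it only reuses
the icache_line_size helper and the USER() fault-fixup annotation that
already appear in the quoted code. A caller working on user addresses
would pass its existing fault-handling label as \fixup, while
invalidate_icache_range could pass a label that simply returns:

	.macro	invalidate_icache_by_line start, end, tmp1, tmp2, fixup
	icache_line_size \tmp1, \tmp2		// \tmp1 = I-cache line size in bytes
	sub	\tmp2, \tmp1, #1
	bic	\tmp2, \start, \tmp2		// round \start down to a line boundary
9997:
USER(\fixup, ic	ivau, \tmp2)			// invalidate I line to PoU, may fault on user VAs
	add	\tmp2, \tmp2, \tmp1
	cmp	\tmp2, \end
	b.lo	9997b
	dsb	ish				// ensure completion of the invalidation
	isb					// synchronise the instruction stream
	.endm

With something along these lines, __flush_cache_user_range would keep its
dcache-clean loop and expand the macro with its existing 9f fixup label,
and invalidate_icache_range would reduce to the same expansion plus a ret.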