On Thu, Aug 29, 2019 at 05:03:47PM +0530, Viresh Kumar wrote:
> From: Robin Murphy <robin.murphy@xxxxxxx>
>
> commit 022620eed3d0bc4bf2027326f599f5ad71c2ea3f upstream.
>
> Provide an optimised, assembly implementation of array_index_mask_nospec()
> for arm64 so that the compiler is not in a position to transform the code
> in ways which affect its ability to inhibit speculation (e.g. by introducing
> conditional branches).
>
> This is similar to the sequence used by x86, modulo architectural differences
> in the carry/borrow flags.
>
> Reviewed-by: Mark Rutland <mark.rutland@xxxxxxx>
> Signed-off-by: Robin Murphy <robin.murphy@xxxxxxx>
> Signed-off-by: Will Deacon <will.deacon@xxxxxxx>
> Signed-off-by: Catalin Marinas <catalin.marinas@xxxxxxx>
> Signed-off-by: Viresh Kumar <viresh.kumar@xxxxxxxxxx>

Reviewed-by: Mark Rutland <mark.rutland@xxxxxxx> [v4.4 backport]

Mark.

> ---
>  arch/arm64/include/asm/barrier.h | 21 +++++++++++++++++++++
>  1 file changed, 21 insertions(+)
>
> diff --git a/arch/arm64/include/asm/barrier.h b/arch/arm64/include/asm/barrier.h
> index 574486634c62..7c25e3e11b6d 100644
> --- a/arch/arm64/include/asm/barrier.h
> +++ b/arch/arm64/include/asm/barrier.h
> @@ -37,6 +37,27 @@
>  #define dma_rmb()	dmb(oshld)
>  #define dma_wmb()	dmb(oshst)
>
> +/*
> + * Generate a mask for array_index__nospec() that is ~0UL when 0 <= idx < sz
> + * and 0 otherwise.
> + */
> +#define array_index_mask_nospec array_index_mask_nospec
> +static inline unsigned long array_index_mask_nospec(unsigned long idx,
> +						    unsigned long sz)
> +{
> +	unsigned long mask;
> +
> +	asm volatile(
> +	"	cmp	%1, %2\n"
> +	"	sbc	%0, xzr, xzr\n"
> +	: "=r" (mask)
> +	: "r" (idx), "Ir" (sz)
> +	: "cc");
> +
> +	csdb();
> +	return mask;
> +}
> +
>  #define smp_mb()	dmb(ish)
>  #define smp_rmb()	dmb(ishld)
>  #define smp_wmb()	dmb(ishst)
> --
> 2.21.0.rc0.269.g1a574e7a288b
>
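
As an aside for anyone following the backport: the usual way such a mask is
consumed is to AND it into the index after the bounds check, so that an
out-of-bounds index collapses to 0 even if the branch is speculatively
mispredicted. Below is a rough, stand-alone C sketch of that pattern; the
index_mask() helper and the surrounding program are made up for illustration
and are not part of this patch. index_mask() is only a plain-C approximation
of the semantics; the whole point of the arm64 asm above is that a C
expression like this gives no guarantee the compiler won't turn it back into
a conditional branch.

#include <stdio.h>

/* Plain-C approximation of the mask semantics: ~0UL when idx < sz,
 * 0 otherwise. Illustrative only; the backported cmp/sbc + csdb sequence
 * exists precisely because C code like this offers no protection against
 * the compiler reintroducing a branch. */
static inline unsigned long index_mask(unsigned long idx, unsigned long sz)
{
	return ~(long)(idx | (sz - 1UL - idx)) >> (sizeof(long) * 8 - 1);
}

int main(void)
{
	unsigned long table[8] = { 10, 11, 12, 13, 14, 15, 16, 17 };
	unsigned long idx = 5;	/* pretend this came from an untrusted source */
	unsigned long sz = 8;

	if (idx < sz) {
		/* The branch above may be mispredicted under speculation, so
		 * the index is additionally clamped with the mask: in-bounds
		 * values pass through unchanged, out-of-bounds ones become 0
		 * and cannot steer a speculative load. */
		idx &= index_mask(idx, sz);
		printf("table[%lu] = %lu\n", idx, table[idx]);
	}
	return 0;
}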