From: Matt Sealey <matt.sealey@xxxxxxx>
Subject: lib/lzo: fast 8-byte copy on arm64

Enable faster 8-byte copies on arm64.

Link: http://lkml.kernel.org/r/20181127161913.23863-6-dave.rodgman@xxxxxxx
Link: http://lkml.kernel.org/r/20190205141950.9058-4-dave.rodgman@xxxxxxx
Signed-off-by: Matt Sealey <matt.sealey@xxxxxxx>
Signed-off-by: Dave Rodgman <dave.rodgman@xxxxxxx>
Cc: David S. Miller <davem@xxxxxxxxxxxxx>
Cc: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
Cc: Herbert Xu <herbert@xxxxxxxxxxxxxxxxxxx>
Cc: Markus F.X.J. Oberhumer <markus@xxxxxxxxxxxxx>
Cc: Minchan Kim <minchan@xxxxxxxxxx>
Cc: Nitin Gupta <nitingupta910@xxxxxxxxx>
Cc: Richard Purdie <rpurdie@xxxxxxxxxxxxxx>
Cc: Sergey Senozhatsky <sergey.senozhatsky.work@xxxxxxxxx>
Cc: Sonny Rao <sonnyrao@xxxxxxxxxx>
Cc: Stephen Rothwell <sfr@xxxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

--- a/lib/lzo/lzodefs.h~lib-lzo-fast-8-byte-copy-on-arm64
+++ a/lib/lzo/lzodefs.h
@@ -15,7 +15,7 @@

 #define COPY4(dst, src)	\
		put_unaligned(get_unaligned((const u32 *)(src)), (u32 *)(dst))
-#if defined(CONFIG_X86_64)
+#if defined(CONFIG_X86_64) || defined(CONFIG_ARM64)
 #define COPY8(dst, src)	\
		put_unaligned(get_unaligned((const u64 *)(src)), (u64 *)(dst))
 #else
_
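
[Not part of the patch: a minimal userspace sketch of what the two COPY8()
branches do semantically, with hypothetical helper names chosen here for
illustration.  The CONFIG_X86_64/CONFIG_ARM64 path is one unaligned 64-bit
load plus one unaligned 64-bit store; the fallback is two 4-byte copies.
In userspace, a fixed-size memcpy() expresses the same unaligned access,
and on arm64/x86-64 the compiler typically lowers it to a single
load/store pair.]

#include <stdint.h>
#include <string.h>

/* Sketch of the 8-byte fast path (CONFIG_X86_64 || CONFIG_ARM64). */
static inline void copy8(unsigned char *dst, const unsigned char *src)
{
	uint64_t v;

	memcpy(&v, src, sizeof(v));	/* unaligned 8-byte load  */
	memcpy(dst, &v, sizeof(v));	/* unaligned 8-byte store */
}

/* Sketch of the #else fallback: two 4-byte copies. */
static inline void copy8_generic(unsigned char *dst, const unsigned char *src)
{
	uint32_t lo, hi;

	memcpy(&lo, src, sizeof(lo));
	memcpy(&hi, src + 4, sizeof(hi));
	memcpy(dst, &lo, sizeof(lo));
	memcpy(dst + 4, &hi, sizeof(hi));
}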