This is a note to let you know that I've just added the patch titled

    sparc64: Fix FPU register corruption with AES crypto offload.

to the 3.16-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     sparc64-fix-fpu-register-corruption-with-aes-crypto-offload.patch
and it can be found in the queue-3.16 subdirectory.

If you, or anyone else, feels it should not be added to the stable
tree, please let <stable@xxxxxxxxxxxxxxx> know about it.


>From foo@baz Tue Oct 28 11:19:22 CST 2014
From: "David S. Miller" <davem@xxxxxxxxxxxxx>
Date: Tue, 14 Oct 2014 19:37:58 -0700
Subject: sparc64: Fix FPU register corruption with AES crypto offload.

From: "David S. Miller" <davem@xxxxxxxxxxxxx>

[ Upstream commit f4da3628dc7c32a59d1fb7116bb042e6f436d611 ]

The AES loops in arch/sparc/crypto/aes_glue.c use a scheme where the
key material is preloaded into the FPU registers, and then we loop over
and over doing the crypt operation, reusing those pre-cooked key
registers.

There are intervening blkcipher*() calls between the crypt operation
calls.  And those might perform memcpy() and thus also try to use the
FPU.

The sparc64 kernel FPU usage mechanism is designed to allow such
recursive uses, but with a catch.

There has to be a trap between the two FPU using threads of control.

The mechanism works by, when the FPU is already in use by the kernel,
allocating a slot for FPU saving at trap time.  Then if, within the
trap handler, we try to use the FPU registers, the pre-trap FPU
register state is saved into the slot.  Then at trap return time we
notice this and restore the pre-trap FPU state.

Over the long term there are various more involved ways we can make
this work, but for a quick fix let's take advantage of the fact that
the situation where this happens is very limited.
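[ Editor's illustration, not part of the patch: the quick fix amounts to
"check whether the FPU is already claimed before entering the FPU copy
path, and fall back to an integer-register path if it is."  The
user-space C sketch below mimics that decision; `fprs`,
`vis_entry_half_fast()`, and `ng4_memcpy()` are made-up stand-ins for
the %fprs register, the new VISEntryHalfFast macro, and the real
NG4memcpy routine. ]

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Stand-in for the FPRS_FEF bit of %fprs: set while the "FPU" is
 * already claimed by an in-kernel user (e.g. the AES crypt loop). */
#define FPRS_FEF 0x4
static unsigned int fprs;

static int fpu_copies, int_copies;   /* which path each copy took */

/* Mimics VISEntryHalfFast(fail_label): succeed only if the FPU is
 * currently idle; otherwise report failure so the caller can branch
 * to the integer-only path. */
static bool vis_entry_half_fast(void)
{
    if (fprs & FPRS_FEF)
        return false;                /* ba,a,pt %xcc, fail_label */
    fprs |= FPRS_FEF;                /* wr %o5, FPRS_FEF, %fprs  */
    return true;
}

static void vis_exit_half(void)
{
    fprs &= ~FPRS_FEF;
}

/* Sketch of the large-unaligned decision in a kernel (NON_USER_COPY)
 * memcpy: prefer the FPU block-copy path, but fall back to integer
 * registers when the FPU is already in use. */
static void *ng4_memcpy(void *dst, const void *src, size_t len)
{
    if (len >= 64 && vis_entry_half_fast()) {
        fpu_copies++;                /* FPU/VIS block-copy path */
        memcpy(dst, src, len);
        vis_exit_half();
    } else {
        int_copies++;                /* .Lmedium integer path */
        memcpy(dst, src, len);
    }
    return dst;
}
```

With the FPU idle, a large copy takes the FPU path; with FPRS_FEF
already set (as during the AES loop), the same copy silently takes the
integer path, so the preloaded key registers are never clobbered.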
All sparc64 chips that support the crypto instructions also use the
Niagara4 memcpy routine, and that routine only uses the FPU for large
copies where we can't get the source aligned properly to a multiple of
8 bytes.

We look to see if the FPU is already in use in this context, and if so
we use the non-large copy path which only uses integer registers.

Furthermore, we also limit this special logic to when we are doing a
kernel copy, rather than a user copy.

Signed-off-by: David S. Miller <davem@xxxxxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 arch/sparc/include/asm/visasm.h |    8 ++++++++
 arch/sparc/lib/NG4memcpy.S      |   14 +++++++++++++-
 2 files changed, 21 insertions(+), 1 deletion(-)

--- a/arch/sparc/include/asm/visasm.h
+++ b/arch/sparc/include/asm/visasm.h
@@ -39,6 +39,14 @@
 297:	wr		%o5, FPRS_FEF, %fprs;		\
 298:
 
+#define VISEntryHalfFast(fail_label)		\
+	rd		%fprs, %o5;		\
+	andcc		%o5, FPRS_FEF, %g0;	\
+	be,pt		%icc, 297f;		\
+	 nop;					\
+	ba,a,pt		%xcc, fail_label;	\
+297:	wr		%o5, FPRS_FEF, %fprs;
+
 #define VISExitHalf					\
 	wr		%o5, 0, %fprs;
--- a/arch/sparc/lib/NG4memcpy.S
+++ b/arch/sparc/lib/NG4memcpy.S
@@ -41,6 +41,10 @@
 #endif
 #endif
 
+#if !defined(EX_LD) && !defined(EX_ST)
+#define NON_USER_COPY
+#endif
+
 #ifndef EX_LD
 #define EX_LD(x)	x
 #endif
@@ -197,9 +201,13 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
 	mov		EX_RETVAL(%o3), %o0
 
 .Llarge_src_unaligned:
+#ifdef NON_USER_COPY
+	VISEntryHalfFast(.Lmedium_vis_entry_fail)
+#else
+	VISEntryHalf
+#endif
 	andn		%o2, 0x3f, %o4
 	sub		%o2, %o4, %o2
-	VISEntryHalf
 	alignaddr	%o1, %g0, %g1
 	add		%o1, %o4, %o1
 	EX_LD(LOAD(ldd, %g1 + 0x00, %f0))
@@ -240,6 +248,10 @@ FUNC_NAME:	/* %o0=dst, %o1=src, %o2=len */
 	nop
 	ba,a,pt		%icc, .Lmedium_unaligned
 
+#ifdef NON_USER_COPY
+.Lmedium_vis_entry_fail:
+	 or		%o0, %o1, %g2
+#endif
 .Lmedium:
 	LOAD(prefetch, %o1 + 0x40, #n_reads_strong)
 	andcc		%g2, 0x7, %g0


Patches currently in stable-queue which might be from davem@xxxxxxxxxxxxx are
queue-3.16/sparc64-adjust-vmalloc-region-size-based-upon-available-virtual-address-bits.patch
queue-3.16/sparc64-fix-fpu-register-corruption-with-aes-crypto-offload.patch
queue-3.16/sparc64-move-request_irq-from-ldc_bind-to-ldc_alloc.patch
queue-3.16/sparc32-dma_alloc_coherent-must-honour-gfp-flags.patch
queue-3.16/sparc64-kill-unnecessary-tables-and-increase-max_banks.patch
queue-3.16/sparc-let-memset-return-the-address-argument.patch
queue-3.16/sparc64-use-kernel-page-tables-for-vmemmap.patch
queue-3.16/sparc64-sparse-irq.patch
queue-3.16/sparc64-fix-physical-memory-management-regressions-with-large-max_phys_bits.patch
queue-3.16/sparc64-fix-lockdep-warnings-on-reboot-on-ultra-5.patch
queue-3.16/sparc64-switch-to-4-level-page-tables.patch
queue-3.16/sparc64-sun4v-tlb-error-power-off-events.patch
queue-3.16/sparc-bpf_jit-fix-support-for-ldx-stx-mem-and-skf_ad_vlan_tag.patch
queue-3.16/sparc64-increase-size-of-boot-string-to-1024-bytes.patch
queue-3.16/sparc64-find_node-adjustment.patch
queue-3.16/sparc64-fix-reversed-start-end-in-flush_tlb_kernel_range.patch
queue-3.16/sparc64-increase-max_phys_address_bits-to-53.patch
queue-3.16/sparc64-define-va-hole-at-run-time-rather-than-at-compile-time.patch
queue-3.16/sparc64-fix-register-corruption-in-top-most-kernel-stack-frame-during-boot.patch
queue-3.16/sparc64-do-not-disable-interrupts-in-nmi_cpu_busy.patch
queue-3.16/sparc64-support-m6-and-m7-for-building-cpu-distribution-map.patch
queue-3.16/sparc64-cpu-hardware-caps-support-for-sparc-m6-and-m7.patch
queue-3.16/sparc64-do-not-define-thread-fpregs-save-area-as-zero-length-array.patch
queue-3.16/sparc-bpf_jit-fix-loads-from-negative-offsets.patch
queue-3.16/sparc64-t5-pmu.patch
queue-3.16/sparc64-adjust-ktsb-assembler-to-support-larger-physical-addresses.patch
queue-3.16/sparc64-implement-__get_user_pages_fast.patch
queue-3.16/sparc64-fix-corrupted-thread-fault-code.patch
queue-3.16/sparc64-fix-hibernation-code-refrence-to-page_offset.patch
queue-3.16/sparc64-correctly-recognise-m6-and-m7-cpu-type.patch
queue-3.16/sparc64-fix-pcr_ops-initialization-and-usage-bugs.patch
--
To unsubscribe from this list: send the line "unsubscribe stable" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html