Re: [PATCH 1/3] arm64/fpsimd: Ensure SME storage is allocated after SVE VL changes


 



I've confirmed on QEMU and Arm's FVP that this fixes the issue I was seeing.


From: Mark Brown <broonie@xxxxxxxxxx>
Sent: 13 July 2023 21:06
To: Catalin Marinas <Catalin.Marinas@xxxxxxx>; Will Deacon <will@xxxxxxxxxx>; Shuah Khan <shuah@xxxxxxxxxx>
Cc: David Spickett <David.Spickett@xxxxxxx>; linux-arm-kernel@xxxxxxxxxxxxxxxxxxx <linux-arm-kernel@xxxxxxxxxxxxxxxxxxx>; linux-kernel@xxxxxxxxxxxxxxx <linux-kernel@xxxxxxxxxxxxxxx>; linux-kselftest@xxxxxxxxxxxxxxx <linux-kselftest@xxxxxxxxxxxxxxx>; Mark Brown <broonie@xxxxxxxxxx>; stable@xxxxxxxxxxxxxxx <stable@xxxxxxxxxxxxxxx>
Subject: [PATCH 1/3] arm64/fpsimd: Ensure SME storage is allocated after SVE VL changes 
 
When we reconfigure the SVE vector length we discard the backing storage
for the SVE vectors and then reallocate it on next SVE use, leaving the
SME specific state alone. This means that we do not re-enable SME traps
if they were already disabled, so userspace code can enter streaming mode
without trapping, putting the task in a state where any attempt to save
its state will fault.
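
For illustration, a userspace sequence along these lines can hit the fault
on an unfixed kernel (a rough sketch only: it assumes SVE and SME are
implemented and that 32 bytes is a supported SVE vector length, and emits
SMSTART as raw encodings so no SME-aware toolchain is needed):

#include <sys/prctl.h>

#ifndef PR_SVE_SET_VL
#define PR_SVE_SET_VL	50
#endif

/* SMSTART ZA and SMSTART SM as raw encodings */
#define smstart_za()	asm volatile(".inst 0xd503457f")
#define smstart_sm()	asm volatile(".inst 0xd503437f")

int main(void)
{
	/* First SME use traps; the kernel allocates the SME state and
	 * disables SME traps for the task. */
	smstart_za();

	/* An unfixed kernel frees the SVE backing storage here but leaves
	 * the live ZA state and the trap configuration untouched. */
	prctl(PR_SVE_SET_VL, 32);

	/* Streaming mode is entered without trapping, so nothing
	 * reallocates the SVE storage... */
	smstart_sm();

	/* ...and the next context switch or signal delivery makes the
	 * kernel save the streaming mode register state into the freed
	 * buffer. */
	for (;;)
		;
}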

Since the ABI does not specify that changing the SVE vector length
disturbs SME state, and since SVE code may not be aware of SME code in
the process, we shouldn't simply discard any ZA state. Instead,
immediately reallocate the SVE storage if SME is active, and disable SME
if we change the SVE vector length while there is no SME state active.

Re-enabling SME traps on SVE vector length changes would make the
overall code more complex since we would have a state where we have
valid SME state stored but might get an SME trap.
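
To restate the effect of the change below in one place (a condensed
summary of the diff, not additional semantics):

	if (type == ARM64_VEC_SME ||
	    !(task->thread.svcr & (SVCR_SM_MASK | SVCR_ZA_MASK))) {
		/* Changing the SME VL, or SME not in use: clear SM/ZA and
		 * TIF_SME and free the SME storage; the SVE storage is
		 * freed and lazily reallocated on next use as before. */
	} else {
		/* Changing the SVE VL with live SME state: keep the SME
		 * storage, SVCR and TIF_SME, but free and immediately
		 * reallocate the SVE storage so that a trap-free entry to
		 * streaming mode never finds it missing. */
	}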

Fixes: 9e4ab6c89109 ("arm64/sme: Implement vector length configuration prctl()s")
Reported-by: David Spickett <David.Spickett@xxxxxxx>
Signed-off-by: Mark Brown <broonie@xxxxxxxxxx>
Cc: stable@xxxxxxxxxxxxxxx
---
 arch/arm64/kernel/fpsimd.c | 32 +++++++++++++++++++++++++-------
 1 file changed, 25 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 7a1aeb95d7c3..a527b95c06e7 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -847,6 +847,9 @@ void sve_sync_from_fpsimd_zeropad(struct task_struct *task)
 int vec_set_vector_length(struct task_struct *task, enum vec_type type,
                           unsigned long vl, unsigned long flags)
 {
+       bool free_sme = false;
+       bool alloc_sve = false;
+
         if (flags & ~(unsigned long)(PR_SVE_VL_INHERIT |
                                      PR_SVE_SET_VL_ONEXEC))
                 return -EINVAL;
@@ -897,22 +900,37 @@ int vec_set_vector_length(struct task_struct *task, enum vec_type type,
                 task->thread.fp_type = FP_STATE_FPSIMD;
         }
 
-       if (system_supports_sme() && type == ARM64_VEC_SME) {
-               task->thread.svcr &= ~(SVCR_SM_MASK |
-                                      SVCR_ZA_MASK);
-               clear_thread_flag(TIF_SME);
+       if (system_supports_sme()) {
+               if (type == ARM64_VEC_SME ||
+                   !(task->thread.svcr & (SVCR_SM_MASK | SVCR_ZA_MASK))) {
+                       /*
+                        * We are changing the SME VL or weren't using
+                        * SME anyway, discard the state and force a
+                        * reallocation.
+                        */
+                       task->thread.svcr &= ~(SVCR_SM_MASK |
+                                              SVCR_ZA_MASK);
+                       clear_thread_flag(TIF_SME);
+                       free_sme = true;
+               } else {
+                       alloc_sve = true;
+               }
         }
 
         if (task == current)
                 put_cpu_fpsimd_context();
 
         /*
-        * Force reallocation of task SVE and SME state to the correct
-        * size on next use:
+        * Free the changed states if they are not in use, they will
+        * be reallocated to the correct size on next use.  If we need
+        * SVE state due to having untouched SME state then reallocate
+        * it immediately.
          */
         sve_free(task);
-       if (system_supports_sme() && type == ARM64_VEC_SME)
+       if (free_sme)
                 sme_free(task);
+       if (alloc_sve)
+               sve_alloc(task, true);
 
         task_set_vl(task, type, vl);
 

-- 
2.30.2



