5.15-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Andrei Vagin <avagin@xxxxxxxxxx>

commit d877550eaf2dc9090d782864c96939397a3c6835 upstream.

Before this change, the expected size of the user space buffer was
taken from fx_sw->xstate_size. fx_sw->xstate_size can be changed
from user-space, so it is possible to construct a sigreturn frame where:

 * fx_sw->xstate_size is smaller than the size required by valid bits
   in fx_sw->xfeatures.
 * user-space unmaps parts of the sigframe fpu buffer so that not all of
   the buffer required by xrstor is accessible.

In this case, xrstor tries to restore and accesses the unmapped area
which results in a fault. But fault_in_readable succeeds because buf +
fx_sw->xstate_size is within the still mapped area, so it goes back and
tries xrstor again. It will spin in this loop forever.

Instead, fault in the maximum size which can be touched by XRSTOR (taken
from fpstate->user_size).

[ dhansen: tweak subject / changelog ]

Fixes: fcb3635f5018 ("x86/fpu/signal: Handle #PF in the direct restore path")
Reported-by: Konstantin Bogomolov <bogomolov@xxxxxxxxxx>
Suggested-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Signed-off-by: Andrei Vagin <avagin@xxxxxxxxxx>
Signed-off-by: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
Cc: stable@xxxxxxxxxxxxxxx
Link: https://lore.kernel.org/all/20240130063603.3392627-1-avagin%40google.com
Link: https://lore.kernel.org/all/20240130063603.3392627-1-avagin%40google.com
Signed-off-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 arch/x86/kernel/fpu/signal.c |   12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

--- a/arch/x86/kernel/fpu/signal.c
+++ b/arch/x86/kernel/fpu/signal.c
@@ -246,12 +246,13 @@ static int __restore_fpregs_from_user(vo
  * Attempt to restore the FPU registers directly from user memory.
  * Pagefaults are handled and any errors returned are fatal.
  */
-static int restore_fpregs_from_user(void __user *buf, u64 xrestore,
-				    bool fx_only, unsigned int size)
+static int restore_fpregs_from_user(void __user *buf, u64 xrestore, bool fx_only)
 {
 	struct fpu *fpu = &current->thread.fpu;
 	int ret;
 
+	/* Restore enabled features only. */
+	xrestore &= xfeatures_mask_all & XFEATURE_MASK_USER_SUPPORTED;
 retry:
 	fpregs_lock();
 	pagefault_disable();
@@ -278,7 +279,7 @@ retry:
 		if (ret != -EFAULT)
 			return -EINVAL;
 
-		if (!fault_in_readable(buf, size))
+		if (!fault_in_readable(buf, fpu_user_xstate_size))
 			goto retry;
 		return -EFAULT;
 	}
@@ -303,7 +304,6 @@ retry:
 static int __fpu_restore_sig(void __user *buf, void __user *buf_fx,
 			     bool ia32_fxstate)
 {
-	int state_size = fpu_kernel_xstate_size;
 	struct task_struct *tsk = current;
 	struct fpu *fpu = &tsk->thread.fpu;
 	struct user_i387_ia32_struct env;
@@ -319,7 +319,6 @@ static int __fpu_restore_sig(void __user
 			return ret;
 
 		fx_only = !fx_sw_user.magic1;
-		state_size = fx_sw_user.xstate_size;
 		user_xfeatures = fx_sw_user.xfeatures;
 	} else {
 		user_xfeatures = XFEATURE_MASK_FPSSE;
@@ -332,8 +331,7 @@ static int __fpu_restore_sig(void __user
 	 * faults. If it does, fall back to the slow path below, going
 	 * through the kernel buffer with the enabled pagefault handler.
 	 */
-	return restore_fpregs_from_user(buf_fx, user_xfeatures, fx_only,
-					state_size);
+	return restore_fpregs_from_user(buf_fx, user_xfeatures, fx_only);
 }
 
 /*
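
As background for the changelog above, the shape of the pre-fix livelock can be
modelled entirely in user space. The sketch below is illustrative only and does
not mirror the real kernel helpers: mock_xrstor(), mock_fault_in_readable(),
MAPPED_BYTES, HW_SIZE and CLAIMED_SIZE are hypothetical stand-ins for XRSTOR,
fault_in_readable() and the sigframe layout. It shows why a retry loop that only
faults in the user-supplied size never makes progress once XRSTOR needs more
bytes than remain mapped, while probing the full hardware size fails cleanly.

/*
 * Illustrative user-space model of the pre-fix retry loop.
 * Hypothetical stand-ins, not kernel code:
 *   mock_xrstor()            ~ XRSTOR touching HW_SIZE bytes
 *   mock_fault_in_readable() ~ fault_in_readable() probing n bytes
 *   MAPPED_BYTES             ~ the part of the sigframe still mapped
 */
#include <stdbool.h>
#include <stdio.h>

#define MAPPED_BYTES	512	/* user kept only this much mapped    */
#define HW_SIZE		832	/* bytes XRSTOR really needs          */
#define CLAIMED_SIZE	256	/* attacker-chosen fx_sw->xstate_size */

/* The "restore" fails unless every byte XRSTOR touches is accessible. */
static bool mock_xrstor(unsigned int hw_size)
{
	return hw_size <= MAPPED_BYTES;
}

/* The "fault-in" succeeds if the first n bytes are accessible. */
static bool mock_fault_in_readable(unsigned int n)
{
	return n <= MAPPED_BYTES;
}

/* Retry as long as faulting in 'probe' bytes succeeds. */
static int restore(unsigned int probe, int max_iters)
{
	for (int i = 0; i < max_iters; i++) {
		if (mock_xrstor(HW_SIZE))
			return 0;		/* restored          */
		if (!mock_fault_in_readable(probe))
			return -1;		/* fatal, -EFAULT    */
		/* fault-in "succeeded", so go around again ... */
	}
	return -2;				/* still spinning    */
}

int main(void)
{
	/* Old behaviour: probe only the user-supplied size -> livelock. */
	printf("probe=claimed size : %d (still spinning at the cap)\n",
	       restore(CLAIMED_SIZE, 1000));

	/* Fixed behaviour: probe the full size XRSTOR touches -> clean failure. */
	printf("probe=hw size      : %d (fails cleanly)\n",
	       restore(HW_SIZE, 1000));
	return 0;
}

Built with a plain cc, the first call only stops at the artificial iteration
cap, while the second returns immediately: the same distinction the patch
restores by passing fpu_user_xstate_size instead of the user-supplied size to
fault_in_readable().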