The sequence

  fpu->initialized = 1;
  preempt_disable();
  fpu__restore(fpu);
  preempt_enable();

is racy with regard to a context switch. A context switch after the first
line would save the current content of the FPU registers to fpu->state and
overwrite the state that has been prepared there since fpu__drop().

Use local_bh_disable() around the restore sequence to avoid the race. BH
needs to be disabled because BH is allowed to run even with preemption
disabled and might invoke kernel_fpu_begin().

This possible race was reported by the Kernel Test Robot in February 2016,
while there was still lazy FPU support.

Link: https://lkml.kernel.org/r/20160226074940.GA28911@xxxxxxx
Cc: stable@xxxxxxxxxxxxxxx
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
---
 arch/x86/kernel/fpu/signal.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c
index 61a949d84dfa5..d99a8ee9e185e 100644
--- a/arch/x86/kernel/fpu/signal.c
+++ b/arch/x86/kernel/fpu/signal.c
@@ -344,10 +344,10 @@ static int __fpu__restore_sig(void __user *buf, void __user *buf_fx, int size)
 			sanitize_restored_xstate(tsk, &env, xfeatures, fx_only);
 		}
 
+		local_bh_disable();
 		fpu->initialized = 1;
-		preempt_disable();
 		fpu__restore(fpu);
-		preempt_enable();
+		local_bh_enable();
 
 		return err;
 	} else {
-- 
2.19.1
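
Note (illustration only, not part of the patch): a rough sketch of the window
that remains if only preemption is disabled. It assumes the pre-4.20 behaviour
in which kernel_fpu_begin() saves the live registers into fpu->state when
fpu->initialized is set; the exact call chain shown is an assumption for
illustration.

  task context (preemption disabled)     softirq on the same CPU
  ----------------------------------     ------------------------
  fpu->initialized = 1;
                                         kernel_fpu_begin();
                                           /* sees fpu->initialized == 1 and
                                              saves the live registers,
                                              clobbering the state that was
                                              just prepared in fpu->state */
                                         ...
                                         kernel_fpu_end();
  fpu__restore(fpu);
    /* loads the clobbered state */

With local_bh_disable() around the sequence, the softirq cannot run on this
CPU until local_bh_enable(), so the prepared fpu->state stays intact until
fpu__restore() has loaded it.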