Commit 18011eac28c7 ("arm64: tls: Avoid unconditional zeroing of
tpidrro_el0 for native tasks") tried to optimise the context switching
of tpidrro_el0 by eliding the clearing of the register when switching
to a native task with kpti enabled, on the erroneous assumption that
the kpti trampoline entry code would already have taken care of the
write.

Although the kpti trampoline does zero the register on entry from a
native task, the check in tls_thread_switch() is on the *next* task and
so we can end up leaving a stale, non-zero value in the register if the
previous task was 32-bit.

Drop the broken optimisation and zero tpidrro_el0 unconditionally when
switching to a native 64-bit task.

Cc: Mark Rutland <mark.rutland@xxxxxxx>
Cc: <stable@xxxxxxxxxxxxxxx>
Fixes: 18011eac28c7 ("arm64: tls: Avoid unconditional zeroing of tpidrro_el0 for native tasks")
Signed-off-by: Will Deacon <will@xxxxxxxxxx>
---
You fix one side-channel and introduce another... :(

 arch/arm64/kernel/process.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 3e7c8c8195c3..2bbcbb11d844 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -442,7 +442,7 @@ static void tls_thread_switch(struct task_struct *next)
 
 	if (is_compat_thread(task_thread_info(next)))
 		write_sysreg(next->thread.uw.tp_value, tpidrro_el0);
-	else if (!arm64_kernel_unmapped_at_el0())
+	else
 		write_sysreg(0, tpidrro_el0);
 
 	write_sysreg(*task_user_tls(next), tpidr_el0);
-- 
2.47.0.277.g8800431eea-goog
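
For anyone following along, here is a rough userspace sketch of why the
old check leaks a stale value. This is not kernel code: the task struct,
the tpidrro_el0 variable and the helper names below are made up purely
for illustration, and the kpti trampoline (which only zeroes the
register on entry from a *native* task) is deliberately not modelled.

/* Hypothetical simulation of the pre-fix switch logic; not kernel code. */
#include <stdbool.h>
#include <stdio.h>

static unsigned long tpidrro_el0;        /* stands in for the real sysreg  */
static const bool kpti_enabled = true;   /* arm64_kernel_unmapped_at_el0() */

struct fake_task {
	bool compat;                     /* 32-bit task?                   */
	unsigned long tp_value;          /* its compat TLS value           */
};

/* Mirrors the pre-fix check, which only looks at the *next* task. */
static void buggy_tls_switch(const struct fake_task *next)
{
	if (next->compat)
		tpidrro_el0 = next->tp_value;
	else if (!kpti_enabled)          /* skipped when kpti is on...     */
		tpidrro_el0 = 0;         /* ...so the register stays stale */
}

int main(void)
{
	struct fake_task compat_task = { .compat = true,  .tp_value = 0xdeadbeef };
	struct fake_task native_task = { .compat = false, .tp_value = 0 };

	buggy_tls_switch(&compat_task); /* 32-bit task runs, sets its TLS */
	buggy_tls_switch(&native_task); /* switch to a native 64-bit task */

	/* Prints 0xdeadbeef: the 32-bit task's TLS remains visible. */
	printf("tpidrro_el0 after switch to native task: %#lx\n", tpidrro_el0);
	return 0;
}

With the unconditional "else" from the patch, the second switch would
clear the register regardless of whether kpti is enabled.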