On Fri, Nov 22, 2019 at 12:46:03AM -0500, Sasha Levin wrote:
> From: Will Deacon <will.deacon@xxxxxxx>
>
> [ Upstream commit 7faa313f05cad184e8b17750f0cbe5216ac6debb ]
>
> Commit 396244692232 ("arm64: preempt: Provide our own implementation of
> asm/preempt.h") extended the preempt count field in struct thread_info
> to 64 bits, so that it consists of a 32-bit count plus a 32-bit flag
> indicating whether or not the current task needs rescheduling.
>
> Whilst the asm-offsets definition of TSK_TI_PREEMPT was updated to point
> to this new field, the assembly usage was left untouched meaning that a
> 32-bit load from TSK_TI_PREEMPT on a big-endian machine actually returns
> the reschedule flag instead of the count.
>
> Whilst we could fix this by pointing TSK_TI_PREEMPT at the count field,
> we're actually better off reworking the two assembly users so that they
> operate on the whole 64-bit value in favour of inspecting the thread
> flags separately in order to determine whether a reschedule is needed.
>
> Acked-by: Ard Biesheuvel <ard.biesheuvel@xxxxxxxxxx>
> Reported-by: "kernelci.org bot" <bot@xxxxxxxxxxxx>
> Tested-by: Kevin Hilman <khilman@xxxxxxxxxxxx>
> Signed-off-by: Will Deacon <will.deacon@xxxxxxx>
> Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>
> ---
>  arch/arm64/include/asm/assembler.h | 8 +++-----
>  arch/arm64/kernel/entry.S          | 6 ++----
>  2 files changed, 5 insertions(+), 9 deletions(-)
>
> diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
> index 5a97ac8531682..0c100506a29aa 100644
> --- a/arch/arm64/include/asm/assembler.h
> +++ b/arch/arm64/include/asm/assembler.h
> @@ -683,11 +683,9 @@ USER(\label, ic	ivau, \tmp2)	// invalidate I line PoU
>  	.macro		if_will_cond_yield_neon
>  #ifdef CONFIG_PREEMPT
>  	get_thread_info	x0
> -	ldr		w1, [x0, #TSK_TI_PREEMPT]
> -	ldr		x0, [x0, #TSK_TI_FLAGS]
> -	cmp		w1, #PREEMPT_DISABLE_OFFSET
> -	csel		x0, x0, xzr, eq
> -	tbnz		x0, #TIF_NEED_RESCHED, .Lyield_\@	// needs rescheduling?
> +	ldr		x0, [x0, #TSK_TI_PREEMPT]
> +	sub		x0, x0, #PREEMPT_DISABLE_OFFSET
> +	cbz		x0, .Lyield_\@
>  	/* fall through to endif_yield_neon */
>  	.subsection	1
>  .Lyield_\@ :
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index 5f800384cb9a8..bb68323530458 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -622,10 +622,8 @@ el1_irq:
>  	irq_handler
>
>  #ifdef CONFIG_PREEMPT
> -	ldr	w24, [tsk, #TSK_TI_PREEMPT]	// get preempt count
> -	cbnz	w24, 1f				// preempt count != 0
> -	ldr	x0, [tsk, #TSK_TI_FLAGS]	// get flags
> -	tbz	x0, #TIF_NEED_RESCHED, 1f	// needs rescheduling?
> +	ldr	x24, [tsk, #TSK_TI_PREEMPT]	// get preempt count
> +	cbnz	x24, 1f				// preempt count != 0
>  	bl	el1_preempt

While updating 4.19-rt, I stumbled on this change in the arm64 backport
and was confused by it. Looking deeper, this is something that breaks
without 396244692232f ("arm64: preempt: Provide our own implementation
of asm/preempt.h").

That commit inverts the sense of the need-resched flag that shares the
preempt word: set means we do *not* need to resched, and clear means we
do. This way the preempt count test can be combined with the
need-resched test, as the two share the same 64-bit word. A value of 0
means we need to preempt (the need-resched half being zero means a
reschedule is needed, and a zero word also means the preempt count is
zero).

Conversely, if the need-resched bit is set, we don't need to resched,
and if the preempt count is anything other than zero, we don't need to
resched either. Since commit 396244692232f packs those two together, a
single test of #TSK_TI_PREEMPT suffices.

But because that commit does not exist in 4.19, we can't remove the
TIF_NEED_RESCHED check, which is exactly what this backport does, and
that breaks the kernel!

-- Steve

>  1:
>  #endif
> --
> 2.20.1