On Wed, 31 Oct 2007 16:39:00 +0000, Ralf Baechle <ralf@xxxxxxxxxxxxxx> wrote:
> > > The only safe but ugly workaround is to change the return from exception
> > > code to detect if the EPC is in the range starting from the condition
> > > check in the idle loop to including the WAIT instruction and if so to
> > > patch the EPC to resume execution at the condition check or the
> > > instruction following the WAIT.
> >
> > I'm also thinking of this approach.  Still wondering if it is worth
> > implementing.
>
> The tickless kernel is very interesting for the low power fraction.  And
> it's especially those users who would suffer most the loss of the ability
> to use the WAIT instruction.  For a system running from two AAA cells the
> tradeoff is clear ...

So I think it's become a must.

Then, something like this?  Selecting at build time is not so good, but
there are some CPUs which do not need this hack at all.  Synthesizing
the ret_from_irq() at runtime might satisfy everyone?

diff --git a/arch/mips/kernel/cpu-probe.c b/arch/mips/kernel/cpu-probe.c
index c8c47a2..621130c 100644
--- a/arch/mips/kernel/cpu-probe.c
+++ b/arch/mips/kernel/cpu-probe.c
@@ -51,12 +51,17 @@ static void r39xx_wait(void)
  * But it is implementation-dependent whether the pipeline restarts when
  * a non-enabled interrupt is requested.
  */
+#ifdef CONFIG_ROLLBACK_CPU_WAIT
+extern void cpu_wait_rollback(void);
+#define r4k_wait cpu_wait_rollback
+#else
 static void r4k_wait(void)
 {
 	__asm__("	.set	mips3	\n"
 		"	wait		\n"
 		"	.set	mips0	\n");
 }
+#endif
 
 /*
  * This variant is preferable as it allows testing need_resched and going to
diff --git a/arch/mips/kernel/entry.S b/arch/mips/kernel/entry.S
index e29598a..ffa043c 100644
--- a/arch/mips/kernel/entry.S
+++ b/arch/mips/kernel/entry.S
@@ -27,6 +27,20 @@
 #endif
 
 	.text
+#ifdef CONFIG_ROLLBACK_CPU_WAIT
+	.align	6
+FEXPORT(cpu_wait_rollback)
+	LONG_L	t0, TI_FLAGS($28)
+	andi	t0, _TIF_NEED_RESCHED
+	bnez	t0, 1f
+	.set	mips3
+	wait
+	.set	mips0
+1:
+	jr	ra
+	.align	6
+cpu_wait_rollback_end:
+#endif
 	.align	5
 #ifndef CONFIG_PREEMPT
 FEXPORT(ret_from_exception)
@@ -35,6 +49,14 @@ FEXPORT(ret_from_exception)
 #endif
 FEXPORT(ret_from_irq)
 	LONG_S	s0, TI_REGS($28)
+#ifdef CONFIG_ROLLBACK_CPU_WAIT
+	LONG_L	t0, PT_EPC(sp)
+	ori	t0, 0x3f
+	xori	t0, 0x3f
+	PTR_LA	t1, cpu_wait_rollback
+	bne	t0, t1, __ret_from_irq
+	 LONG_S	t0, PT_EPC(sp)		# return to cpu_wait_rollback
+#endif
 FEXPORT(__ret_from_irq)
 	LONG_L	t0, PT_STATUS(sp)	# returning to kernel mode?
 	andi	t0, t0, KU_USER
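
For illustration only (not part of the patch), the EPC check added to
ret_from_irq is roughly equivalent to the C logic below.  The helper name
rollback_epc() is made up for this sketch; cpu_wait_rollback and the
cp0_epc field of struct pt_regs are taken from the patch and the MIPS
ptrace headers.

/*
 * Sketch of the rollback check: cpu_wait_rollback is placed on a
 * 64-byte boundary (.align 6) and the whole need_resched test plus
 * WAIT fits before the next .align 6, so rounding the saved EPC down
 * to 64 bytes tells us whether the exception was taken inside the
 * rollback region.
 */
#include <asm/ptrace.h>			/* struct pt_regs, cp0_epc */

extern void cpu_wait_rollback(void);

static void rollback_epc(struct pt_regs *regs)	/* hypothetical helper */
{
	unsigned long epc = regs->cp0_epc & ~0x3fUL;	/* ori/xori 0x3f */

	if (epc == (unsigned long)cpu_wait_rollback)
		regs->cp0_epc = epc;	/* resume at the need_resched check */
}

Rewinding to the start of the region rather than past the WAIT means the
_TIF_NEED_RESCHED test is simply re-executed, so an interrupt that raced
with the check cannot leave the CPU stuck in WAIT with work pending.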