On Sat, 2013-08-24 at 03:45 +0800, Colin Cross wrote:
> Joseph Lo <josephl@xxxxxxxxxx> reported a lockup on Tegra3 caused
> by a race condition in coupled cpuidle. When two or more cpus

Actually, this issue can be reproduced on both the Tegra20 and Tegra30
platforms. I suggest replacing Tegra3 with Tegra20 here, since Tegra20
is the only platform using the coupled CPU idle function in mainline
right now.

> enter idle at the same time, the first cpus to arrive may go to the
> ready loop without processing pending pokes from the last cpu to
> arrive.
>
> This patch adds a check for pending pokes once all cpus have been
> synchronized in the ready loop and resets the coupled state and
> retries if any cpus failed to handle their pending poke.
>
> Retrying on all cpus may trigger the same issue again, so this patch
> also adds a check to ensure that each cpu has received at least one
> poke between when it enters the waiting loop and when it moves on to
> the ready loop.
>
> Reported-by: Joseph Lo <josephl@xxxxxxxxxx>
> CC: stable@xxxxxxxxxxxxxxx
> Signed-off-by: Colin Cross <ccross@xxxxxxxxxxx>
> ---
>  drivers/cpuidle/coupled.c | 107 +++++++++++++++++++++++++++++++++++-----------
>  1 file changed, 82 insertions(+), 25 deletions(-)
>
[snip]
> +/*
> + * The cpuidle_coupled_poke_pending mask is used to ensure that each cpu has

s/cpuidle_coupled_poke_pending/cpuidle_coupled_poked/? :)

> + * been poked once to minimize entering the ready loop with a poke pending,
> + * which would require aborting and retrying.
> + */
> +static cpumask_t cpuidle_coupled_poked;
>

I had worked around this issue on Tegra20 by checking whether an SGI is
pending and aborting the coupled state if so; the lockup can still be
reproduced easily if I remove that check. I tested the same case with
this patch and the result is good, so this patch does indeed fix the
issue.

I also tested with the other two patches and didn't see any regression.
So, for this series:

Tested-by: Joseph Lo <josephl@xxxxxxxxxx>
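
For anyone following along without coupled.c open, the check described in
the changelog boils down to something like the rough sketch below. This is
only my paraphrase, not the actual hunk (which is snipped above); the
helper name is made up, while cpuidle_coupled_poke_pending and
coupled->coupled_cpus follow the existing naming in
drivers/cpuidle/coupled.c:

/*
 * Rough sketch only: once every cpu has synchronized in the ready
 * loop, test whether any online cpu in the coupled set still has an
 * unhandled poke.  The caller would then reset the coupled state and
 * retry instead of entering the deep state with a wakeup outstanding.
 * Assumes the usual coupled.c context (<linux/cpumask.h>, the static
 * cpuidle_coupled_poke_pending mask and struct cpuidle_coupled).
 */
static bool cpuidle_coupled_any_pokes_pending(struct cpuidle_coupled *coupled)
{
	cpumask_t cpus;

	cpumask_and(&cpus, cpu_online_mask, &coupled->coupled_cpus);
	cpumask_and(&cpus, &cpus, &cpuidle_coupled_poke_pending);

	return !cpumask_empty(&cpus);
}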