On Wed, Sep 05, 2018 at 02:23:46PM +0200, Thomas Gleixner wrote:
> On Wed, 5 Sep 2018, Thomas Gleixner wrote:
> > On Tue, 4 Sep 2018, Neeraj Upadhyay wrote:
> > >  	ret = cpuhp_down_callbacks(cpu, st, target);
> > >  	if (ret && st->state > CPUHP_TEARDOWN_CPU && st->state < prev_state) {
> > > -		cpuhp_reset_state(st, prev_state);
> > > +		/*
> > > +		 * As st->last is not set, cpuhp_reset_state() increments
> > > +		 * st->state, which results in CPUHP_AP_SMPBOOT_THREADS being
> > > +		 * skipped during rollback. So, don't use it here.
> > > +		 */
> > > +		st->rollback = true;
> > > +		st->target = prev_state;
> > > +		st->bringup = !st->bringup;
> >
> > No, this is just papering over the actual problem.
> >
> > The state inconsistency happens in take_cpu_down() when it returns with a
> > failure from __cpu_disable(), because that returns with state = TEARDOWN_CPU
> > and st->state is then incremented in undo_cpu_down().
> >
> > That's the real issue and we need to analyze the whole cpu_down rollback
> > logic first.
>
> And looking closer, this is a general issue. It's just that the TEARDOWN state
> makes it simple to observe. It's universally broken when the first teardown
> callback fails, because st->state is only decremented _AFTER_ the callback
> returns success, but undo_cpu_down() increments unconditionally.
>
> Patch below.

This patch fixes the issue reported at [1]. Lorenzo did some debugging, and I
wanted to have a look at it at some point, but this discussion drew my
attention and sounded very similar [2]. So I did a quick test with this patch,
and it fixes the issue.

--
Regards,
Sudeep

[1] https://lore.kernel.org/lkml/CAMuHMdVg868LgL5xTg5Dp5rReKxoo+8fRy+ETJiMxGWZCp+hWw@xxxxxxxxxxxxxx/
[2] https://lore.kernel.org/lkml/20180823131505.GA31558@red-moon/