In the Book3s HV code, kvmppc_run_core() has logic to grab the secondary
threads of the physical core. If for some reason a thread is stuck,
kvmppc_grab_hwthread() can fail, but currently we ignore the failure and
continue into the guest. If the stuck thread is in the kernel, badness
ensues. Instead we should check for failure and bail out.

I've moved the grabbing prior to the startup of runnable threads, to
simplify the error case. AFAICS this is harmless, but I could be missing
something subtle.

Signed-off-by: Michael Ellerman <michael@xxxxxxxxxxxxxx>
---
Or we could just BUG_ON()?
---
 arch/powerpc/kvm/book3s_hv.c | 22 ++++++++++++++++++----
 1 file changed, 18 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 721d460..55925cd 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -884,16 +884,30 @@ static int kvmppc_run_core(struct kvmppc_vcore *vc)
 		if (vcpu->arch.ceded)
 			vcpu->arch.ptid = ptid++;
 
+	/*
+	 * Grab any remaining hw threads so they can't go into the kernel.
+	 * Do this early to simplify the cleanup path if it fails.
+	 */
+	for (i = ptid; i < threads_per_core; ++i) {
+		int j, rc = kvmppc_grab_hwthread(vc->pcpu + i);
+		if (rc) {
+			for (j = i - 1; j >= ptid; j--)
+				kvmppc_release_hwthread(vc->pcpu + j);
+
+			list_for_each_entry(vcpu, &vc->runnable_threads,
+					    arch.run_list)
+				vcpu->arch.ret = -EBUSY;
+
+			goto out;
+		}
+	}
+
 	vc->stolen_tb += mftb() - vc->preempt_tb;
 	vc->pcpu = smp_processor_id();
 	list_for_each_entry(vcpu, &vc->runnable_threads, arch.run_list) {
 		kvmppc_start_thread(vcpu);
 		kvmppc_create_dtl_entry(vcpu, vc);
 	}
 
-	/* Grab any remaining hw threads so they can't go into the kernel */
-	for (i = ptid; i < threads_per_core; ++i)
-		kvmppc_grab_hwthread(vc->pcpu + i);
-
 	preempt_disable();
 	spin_unlock(&vc->lock);
-- 
1.7.9.5