Hi Marc,

> > +static DEFINE_PER_CPU(hyp_spinlock_t, psci_cpu_lock);
> > DEFINE_PER_CPU(enum kvm_nvhe_psci_state, psci_cpu_state);
> >
> >  static u64 get_psci_func_id(struct kvm_cpu_context *host_ctxt)
> > @@ -76,9 +79,32 @@ static __noreturn unsigned long psci_forward_noreturn(struct kvm_cpu_context *ho
> >          hyp_panic(); /* unreachable */
> >  }
> >
> > +static int psci_cpu_off(u64 func_id, struct kvm_cpu_context *host_ctxt)
> > +{
> > +        hyp_spinlock_t *cpu_lock = this_cpu_ptr(&psci_cpu_lock);
> > +        enum kvm_nvhe_psci_state *cpu_power = this_cpu_ptr(&psci_cpu_state);
> > +        u32 power_state = (u32)host_ctxt->regs.regs[1];
> > +        int ret;
> > +
> > +        /* Change the recorded state to OFF before forwarding the call. */
> > +        hyp_spin_lock(cpu_lock);
> > +        *cpu_power = KVM_NVHE_PSCI_CPU_OFF;
> > +        hyp_spin_unlock(cpu_lock);
>
> So at this point, another CPU can observe the vcpu being "off", and issue
> a PSCI_ON, which may result in an "already on". I'm not sure this is an
> actual issue, but it is worth documenting.
>
> What is definitely missing is a rationale for *why* we need to track the
> state of the vcpus. I naively imagined that we could directly proxy the
> PSCI calls to EL3, only repainting PC for PSCI_ON.

I think I've solved that particular problem by *not* using cpu_power for
AFFINITY_INFO. It is used only to resolve the race between CPU_ON/OFF.

You are, however, right that perhaps even that is not needed, and resolving
the race could be left to the host. In that case the hypervisor would just
repaint the CPU_ON/SUSPEND args, as you said. Sketches of both options
below.
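
To make the first option concrete, the CPU_ON side would resolve the race
against CPU_OFF roughly as below. Take it as illustration only:
find_cpu_id(), psci_call() and the PENDING_ON state are stand-ins for
whatever the series ends up using, not final code.

static int psci_cpu_on(u64 func_id, struct kvm_cpu_context *host_ctxt)
{
        u64 mpidr = host_ctxt->regs.regs[1];
        /* Hypothetical helper mapping an MPIDR to a CPU index. */
        unsigned int cpu_id = find_cpu_id(mpidr);
        hyp_spinlock_t *cpu_lock = per_cpu_ptr(&psci_cpu_lock, cpu_id);
        enum kvm_nvhe_psci_state *cpu_power = per_cpu_ptr(&psci_cpu_state, cpu_id);
        int ret;

        /*
         * Forward the call only if the target is recorded as OFF, and
         * publish PENDING_ON before dropping the lock so that a racing
         * CPU_ON/CPU_OFF against the same target backs off.
         */
        hyp_spin_lock(cpu_lock);
        if (*cpu_power != KVM_NVHE_PSCI_CPU_OFF) {
                hyp_spin_unlock(cpu_lock);
                return PSCI_RET_ALREADY_ON;
        }
        *cpu_power = KVM_NVHE_PSCI_CPU_PENDING_ON;
        hyp_spin_unlock(cpu_lock);

        /* Entry-point repainting elided here; see the second sketch. */
        ret = psci_call(func_id, mpidr, host_ctxt->regs.regs[2],
                        host_ctxt->regs.regs[3]);

        /* Roll the recorded state back if EL3 refused the call. */
        if (ret != PSCI_RET_SUCCESS) {
                hyp_spin_lock(cpu_lock);
                *cpu_power = KVM_NVHE_PSCI_CPU_OFF;
                hyp_spin_unlock(cpu_lock);
        }

        return ret;
}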
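
For comparison, the fully stateless variant you describe would reduce
CPU_ON to repainting the entry point, along these lines. Again a sketch:
record_boot_args() and __kvm_hyp_cpu_entry are placeholders for whatever
stashes the host's requested PC/context_id and for the hyp entry stub.

static unsigned long psci_forward_cpu_on(struct kvm_cpu_context *host_ctxt)
{
        u64 mpidr = host_ctxt->regs.regs[1];
        u64 host_pc = host_ctxt->regs.regs[2];  /* entry point the host asked for */
        u64 host_r0 = host_ctxt->regs.regs[3];  /* host's context_id */

        /*
         * Stash (host_pc, host_r0) where the target CPU can find them,
         * then point EL3 at the hypervisor's own entry stub, which
         * re-initialises EL2 before dropping to host_pc.
         */
        record_boot_args(mpidr, host_pc, host_r0);

        return psci_call(PSCI_0_2_FN64_CPU_ON, mpidr,
                         __hyp_pa(__kvm_hyp_cpu_entry), 0);
}

The catch is that record_boot_args() still has to cope with two CPU_ON
calls racing on the same target, so some per-target serialisation seems
hard to avoid even then.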