A VCPU with vcpu->vcpu_id has the highest probability of being stored in
kvm->vcpus[vcpu->vcpu_id].  The other common case, sparse sequential
vcpu_ids, is more likely to find a match downwards from vcpu->vcpu_id.
Random distributions do not matter, so we first search slots
[vcpu->vcpu_id..0] and then slots (vcpu->vcpu_id..kvm->online_vcpus).
If we valued cycles over memory, a direct map between vcpu_id and
vcpu->vcpu_id would be better.

(Like kvm_for_each_vcpu, the code avoids taking kvm->lock by presuming
that kvm->online_vcpus doesn't shrink and that the vcpu pointer is set
up before online_vcpus is incremented.  kvm_free_vcpus() breaks that
presumption, but the VM is destroyed only after the fd has been
released.)

Signed-off-by: Radim Krčmář <rkrcmar@xxxxxxxxxx>
---
 virt/kvm/kvm_main.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 024428b64812..7d532591d5af 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2650,7 +2650,7 @@ int kvm_vm_ioctl_user_exit(struct kvm *kvm, struct kvm_user_exit *info)
 	 * KVM_CREATE_VCPU, where we cast from unsigned long.
 	 */
 	int vcpu_id = info->vcpu_id;
-	int idx;
+	int idx, first;
 	struct kvm_vcpu *vcpu;
 	const struct kvm_user_exit valid = {.vcpu_id = info->vcpu_id};
 
@@ -2659,7 +2659,11 @@ int kvm_vm_ioctl_user_exit(struct kvm *kvm, struct kvm_user_exit *info)
 	if (memcmp(info, &valid, sizeof(valid)))
 		return -EINVAL;
 
-	kvm_for_each_vcpu(idx, vcpu, kvm)
+	for (idx = first = min(vcpu_id, atomic_read(&kvm->online_vcpus) - 1);
+	     idx >= 0 ? (vcpu = kvm_get_vcpu(kvm, idx)) != NULL
+	              : ++first < atomic_read(&kvm->online_vcpus) &&
+	                (vcpu = kvm_get_vcpu(kvm, first)) != NULL;
+	     idx--)
 		if (vcpu->vcpu_id == vcpu_id) {
 			kvm_make_request(KVM_REQ_EXIT, vcpu);
 			kvm_vcpu_kick(vcpu);
-- 
2.5.0
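
For reviewers, the condensed ternary loop condition can be hard to parse.
The following is a minimal user-space sketch of the same bidirectional
search order, written as two plain loops; `struct vcpu`, `find_vcpu_slot`,
and the fixed slot array are hypothetical stand-ins for the kernel's
kvm->vcpus[] table and are not part of the patch:

```c
#define MIN(a, b) ((a) < (b) ? (a) : (b))

/* Hypothetical stand-in for struct kvm_vcpu: only the id matters here. */
struct vcpu {
	int vcpu_id;
};

/*
 * Search slots [start..0] downwards, then (start..online) upwards,
 * mirroring the search order of the loop in the patch.  Returns the
 * slot index holding vcpu_id, or -1 if no online vcpu has that id.
 */
static int find_vcpu_slot(const struct vcpu *vcpus, int online, int vcpu_id)
{
	int first = MIN(vcpu_id, online - 1);
	int idx;

	for (idx = first; idx >= 0; idx--)		/* downward pass */
		if (vcpus[idx].vcpu_id == vcpu_id)
			return idx;

	for (idx = first + 1; idx < online; idx++)	/* upward pass */
		if (vcpus[idx].vcpu_id == vcpu_id)
			return idx;

	return -1;
}
```

In the dense case (vcpu_id == slot index) the first probe hits immediately;
in the sparse sequential case the match is found on the downward pass, as
the commit message argues.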