Roman Kagan <rkagan@xxxxxxxxxxxxx> writes:

> On Fri, Jun 29, 2018 at 12:26:23PM +0200, Vitaly Kuznetsov wrote:
>> Roman Kagan <rkagan@xxxxxxxxxxxxx> writes:
>>
>> > On Thu, Jun 28, 2018 at 03:53:10PM +0200, Vitaly Kuznetsov wrote:
>> >> While it is easy to get VP index from vCPU index, the reverse task is
>> >> hard. Basically, to solve it we have to walk all vCPUs checking if their
>> >> VP index matches. For hypercalls like HvFlushVirtualAddress{List,Space}*
>> >> and the upcoming HvSendSyntheticClusterIpi* where a single CPU may be
>> >> specified in the whole set, this is obviously sub-optimal.
>> >>
>> >> As VP index can be set to anything <= U32_MAX by userspace, using a
>> >> plain [0..MAX_VP_INDEX] array is not a viable option. Use a condensed
>> >> sorted array with logarithmic search complexity instead. Use RCU to
>> >> make read access as fast as possible and maintain atomicity of updates.
>> >
>> > Quoting TLFS 5.0C section 7.8.1:
>> >
>> >> Virtual processors are identified by using an index (VP index). The
>> >> maximum number of virtual processors per partition supported by the
>> >> current implementation of the hypervisor can be obtained through CPUID
>> >> leaf 0x40000005. A virtual processor index must be less than the
>> >> maximum number of virtual processors per partition.
>> >
>> > so this is a dense index, and VP_INDEX >= KVM_MAX_VCPUS is invalid. I
>> > think we're better off enforcing this in kvm_hv_set_msr and keeping the
>> > translation simple. If the algorithm in get_vcpu_by_vpidx is not good
>> > enough (and yes, it can be made to return NULL early on vpidx >=
>> > KVM_MAX_VCPUS instead of taking the slow path) then a simple index array
>> > of KVM_MAX_VCPUS entries should certainly do.
>>
>> Sure, we can use a pre-allocated [0..KVM_MAX_VCPUS] array instead and put
>> limits on what userspace can assign VP_INDEX to. However, while thinking
>> about it I decided to go with the more complex condensed array approach
>> because the tendency is for KVM_MAX_VCPUS to grow and we will be
>> pre-allocating more and more memory for no particular reason (so I think
>> even 'struct kvm_vcpu *vcpus[KVM_MAX_VCPUS]' in 'struct kvm' will need
>> to be converted to something else eventually).
>
> We're talking of kilobytes here. I guess this is going to be the least
> of the scalability problems.

Yes, kilobytes, but per-VM.

>> Anyway, I'm flexible and if you think we should go this way now I'll do
>> this in v3. We can re-think this when we later decide to raise
>> KVM_MAX_VCPUS significantly.
>
> Although there's no strict requirement for that, I think every sensible
> userspace will allocate VP_INDEX linearly, resulting in it being equal to
> KVM's vcpu index. So we've yet to see a case where get_vcpu_by_vpidx
> doesn't take the fast path. If it ever starts appearing in the profiles
> we may consider optimizing it, but ATM I don't even think introducing
> the translation array is justified.

It was Radim who suggested it in the first place :-)

The problem we're trying to solve here is: with PV TLB flush and IPI we
need to walk through the supplied list of VP_INDEXes and get vCPU ids.
Usually the two match, so the lookup is trivial. But in case they don't,
we fall back to a full scan for every VP_INDEX in the supplied list. Now
let's say we have 128 CPUs: we would need to perform up to 128 * 128
extra comparisons on every hypercall. Not good.

So instead of using get_vcpu_by_vpidx() I opted for walking the whole
vCPU list once and checking whether each vCPU's VP_INDEX is in the
supplied set.
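To make the comparison count concrete, here is a stand-alone toy model of
the two strategies (plain C with made-up names like vp_index[] and
req_mask; this only sketches the idea, it is not the actual KVM code):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define N_VCPUS 128

/* Toy model: vp_index[i] is the VP_INDEX userspace assigned to vCPU i,
 * req_mask is the bitmap of VP_INDEXes supplied with the hypercall. */
static uint32_t vp_index[N_VCPUS];
static uint64_t req_mask[(N_VCPUS + 63) / 64];

static bool vp_in_request(uint32_t vp)
{
	return req_mask[vp / 64] & (1ULL << (vp % 64));
}

/* Strategy A: a get_vcpu_by_vpidx()-style lookup for every requested
 * VP_INDEX. When VP_INDEX != vCPU index each lookup degrades to a scan,
 * i.e. up to N_VCPUS * N_VCPUS comparisons per hypercall. */
static unsigned long lookup_per_vpidx(void)
{
	unsigned long comparisons = 0;

	for (uint32_t vp = 0; vp < N_VCPUS; vp++) {
		if (!vp_in_request(vp))
			continue;
		for (int i = 0; i < N_VCPUS; i++) {
			comparisons++;
			if (vp_index[i] == vp)
				break;	/* found the target vCPU */
		}
	}
	return comparisons;
}

/* Strategy B: one pass over the vCPU list, testing each vCPU's VP_INDEX
 * against the request bitmap: N_VCPUS tests, however the indices are
 * assigned. */
static unsigned long walk_all_vcpus(void)
{
	unsigned long comparisons = 0;

	for (int i = 0; i < N_VCPUS; i++) {
		comparisons++;
		if (vp_in_request(vp_index[i])) {
			/* vCPU i is a target; the real code would flush/kick it */
		}
	}
	return comparisons;
}

int main(void)
{
	int i;

	/* Worst case for strategy A: VP_INDEX does not equal the vCPU index. */
	for (i = 0; i < N_VCPUS; i++)
		vp_index[i] = N_VCPUS - 1 - i;
	/* The request targets all 128 VP_INDEXes. */
	for (i = 0; i < (N_VCPUS + 63) / 64; i++)
		req_mask[i] = ~0ULL;

	printf("per-VP_INDEX lookups: %lu comparisons\n", lookup_per_vpidx());
	printf("single vCPU walk:     %lu comparisons\n", walk_all_vcpus());
	return 0;
}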
This way we end up with 128 comparisons in the example above (the
worst-case scenario). However, we lose in simple scenarios where only one
VP_INDEX is specified in the set: we would still need to walk the whole
list. So having the translation array (one way or another) is IMO
justified.

--
Vitaly
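P.S. For completeness, a minimal sketch of the kind of condensed, sorted
VP_INDEX -> vCPU index lookup meant above. The struct and function names
are made up, and the real patch additionally has to protect the array
with RCU and rebuild it on VP_INDEX updates; this only illustrates the
logarithmic lookup for the single-target case:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* One entry per vCPU that has a VP_INDEX assigned; the array is kept
 * sorted by vp_index so a single lookup is O(log n) instead of a scan. */
struct vp_map_entry {
	uint32_t vp_index;
	uint32_t vcpu_idx;
};

static int vp_map_cmp(const void *a, const void *b)
{
	const struct vp_map_entry *x = a, *y = b;

	return (x->vp_index > y->vp_index) - (x->vp_index < y->vp_index);
}

/* Returns the vCPU index for @vp_index, or -1 if no vCPU uses it. */
static int vp_map_lookup(const struct vp_map_entry *map, size_t n,
			 uint32_t vp_index)
{
	struct vp_map_entry key = { .vp_index = vp_index };
	const struct vp_map_entry *e;

	e = bsearch(&key, map, n, sizeof(*map), vp_map_cmp);
	return e ? (int)e->vcpu_idx : -1;
}

int main(void)
{
	/* Sparse assignment: three vCPUs with arbitrary VP_INDEX values. */
	struct vp_map_entry map[] = {
		{ .vp_index = 4096, .vcpu_idx = 2 },
		{ .vp_index = 2,    .vcpu_idx = 0 },
		{ .vp_index = 64,   .vcpu_idx = 1 },
	};
	size_t n = sizeof(map) / sizeof(map[0]);

	/* Keep the array sorted by VP_INDEX so bsearch() works. */
	qsort(map, n, sizeof(map[0]), vp_map_cmp);

	printf("VP_INDEX 64   -> vcpu %d\n", vp_map_lookup(map, n, 64));
	printf("VP_INDEX 1000 -> vcpu %d\n", vp_map_lookup(map, n, 1000));
	return 0;
}

The point of the condensed layout is that the array only holds entries
for vCPUs that actually exist, so its size scales with the number of
vCPUs in the VM rather than with KVM_MAX_VCPUS.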