Roman Kagan <rkagan@xxxxxxxxxxxxx> writes:

> On Fri, Jun 29, 2018 at 03:10:14PM +0200, Vitaly Kuznetsov wrote:
>> Roman Kagan <rkagan@xxxxxxxxxxxxx> writes:
>>
>> > On Fri, Jun 29, 2018 at 01:37:44PM +0200, Vitaly Kuznetsov wrote:
>> >> The problem we're trying to solve here is: with PV TLB flush and IPI we
>> >> need to walk through the supplied list of VP_INDEXes and get VCPU
>> >> ids. Usually they match. But in case they don't [...]
>> >
>> > Why wouldn't they *in practice*? Only if the userspace wanted to be
>> > funny and assigned VP_INDEXes randomly? I'm not sure we need to
>> > optimize for this case.
>>
>> Can someone please remind me why we allow userspace to change it in the
>> first place?
>
> I can ;)
>
> We used not to, and reported KVM's vcpu index as the VP_INDEX. However,
> later we realized that VP_INDEX needed to be persistent across
> migrations and otherwise also known to userspace. Relying on the kernel
> to always initialize its indices in the same order was unacceptable, and
> we came up with no better way of synchronizing VP_INDEX between the
> userspace and the kernel than to let the former set it explicitly.
>
> However, this is basically a future-proofing feature; in practice, both
> QEMU and KVM initialize their indices in the same order.

Thanks! But in the theoretical case when these indices start to differ
after migration, users will notice a slowdown which will be hard to
explain, right? Does that justify the need for vp_idx_to_vcpu_idx?

In any case, I sent v3 with vp_idx_to_vcpu_idx dropped for now; I hope
Radim is OK with us de-coupling these discussions.

--
Vitaly
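
[Editor's note: a minimal, self-contained sketch of the lookup problem being
discussed. It is not the actual KVM code; the struct and function names
(struct vcpu, get_vcpu_fast, get_vcpu_slow, NR_VCPUS) are hypothetical
stand-ins. It only illustrates why a VP_INDEX that no longer equals the VCPU
index turns each PV TLB flush / IPI target lookup into a linear scan, which
is the cost a precomputed mapping such as vp_idx_to_vcpu_idx would avoid.]

#include <stddef.h>
#include <stdio.h>

/* Hypothetical, simplified stand-ins for the real KVM structures. */
struct vcpu {
        unsigned int vcpu_idx;  /* position in the VM's vcpu array           */
        unsigned int vp_index;  /* Hyper-V VP_INDEX, may be set by userspace */
};

#define NR_VCPUS 4

/* Fast path: VP_INDEX happens to equal the vcpu index (the common case
 * with QEMU), so the vcpu array can be indexed directly, O(1). */
static struct vcpu *get_vcpu_fast(struct vcpu *vcpus, unsigned int vp_index)
{
        if (vp_index < NR_VCPUS && vcpus[vp_index].vp_index == vp_index)
                return &vcpus[vp_index];
        return NULL;
}

/* Slow path: indices diverged (e.g. after a migration that restored
 * different VP_INDEX values), so every lookup degenerates into an
 * O(n) walk over all vcpus. */
static struct vcpu *get_vcpu_slow(struct vcpu *vcpus, unsigned int vp_index)
{
        for (size_t i = 0; i < NR_VCPUS; i++)
                if (vcpus[i].vp_index == vp_index)
                        return &vcpus[i];
        return NULL;
}

int main(void)
{
        /* VP_INDEXes deliberately shuffled relative to vcpu indices. */
        struct vcpu vcpus[NR_VCPUS] = {
                { .vcpu_idx = 0, .vp_index = 2 },
                { .vcpu_idx = 1, .vp_index = 0 },
                { .vcpu_idx = 2, .vp_index = 3 },
                { .vcpu_idx = 3, .vp_index = 1 },
        };

        unsigned int vp = 3;
        struct vcpu *v = get_vcpu_fast(vcpus, vp);
        if (!v)
                v = get_vcpu_slow(vcpus, vp); /* the scan a mapping table would avoid */

        printf("VP_INDEX %u -> vcpu_idx %u\n", vp, v ? v->vcpu_idx : ~0u);
        return 0;
}

A persistent VP_INDEX-to-vcpu mapping would keep the lookup O(1) even after
the indices diverge; whether that theoretical case is worth the extra
bookkeeping is exactly the trade-off questioned above.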