On Thu, May 25, 2023, Michal Luczaj wrote:
> Handle the case of vCPU addition and/or APIC enabling during the APIC map
> recalculations. Check the sanity of x2APIC ID in !x2apic_format &&
> apic_x2apic_mode() case.
>
> kvm_recalculate_apic_map() creates the APIC map iterating over the list of
> vCPUs twice. First to find the max APIC ID and allocate a max-sized buffer,
> then again, calling kvm_recalculate_phys_map() for each vCPU. This opens a
> race window: value of max APIC ID can increase _after_ the buffer was
> allocated.
>
> Signed-off-by: Michal Luczaj <mhal@xxxxxxx>
> ---
>  arch/x86/kvm/lapic.c | 8 ++++++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
> index e542cf285b51..39b9a318d04c 100644
> --- a/arch/x86/kvm/lapic.c
> +++ b/arch/x86/kvm/lapic.c
> @@ -265,10 +265,14 @@ static int kvm_recalculate_phys_map(struct kvm_apic_map *new,
>  	 * mapped, i.e. is aliased to multiple vCPUs.  The optimized
>  	 * map requires a strict 1:1 mapping between IDs and vCPUs.
>  	 */
> -	if (apic_x2apic_mode(apic))
> +	if (apic_x2apic_mode(apic)) {
> +		if (x2apic_id > new->max_apic_id)
> +			return -EINVAL;

Hmm, disabling the optimized map just because userspace created a new vCPU is
unfortunate and unnecessary.  Rather than return -EINVAL and only perform the
check when x2APIC is enabled, what if we instead do the check immediately and
return -E2BIG?  Then the caller can retry with a bigger array size.  Preemption
is enabled and retries are bounded by the number of possible vCPUs, so I don't
see any obvious issues with retrying.

And I vote to also add a sanity check on xapic_id, if only to provide
documentation as to why it can't overflow.

I think hoisting the checks up would also obviate the need for cleanup
(patch 2), which I agree isn't obviously better.

E.g. this?  Compile tested only.  I'll test more tomorrow unless you beat me
to it.
Thanks for the fun bugs, as always :-)

---
 arch/x86/kvm/lapic.c | 17 ++++++++++++++---
 1 file changed, 14 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index e542cf285b51..cd34b88c937a 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -228,6 +228,12 @@ static int kvm_recalculate_phys_map(struct kvm_apic_map *new,
 	u32 xapic_id = kvm_xapic_id(apic);
 	u32 physical_id;

+	if (WARN_ON_ONCE(xapic_id >= new->max_apic_id))
+		return -EINVAL;
+
+	if (x2apic_id >= new->max_apic_id)
+		return -E2BIG;
+
 	/*
 	 * Deliberately truncate the vCPU ID when detecting a mismatched APIC
 	 * ID to avoid false positives if the vCPU ID, i.e. x2APIC ID, is a
@@ -253,8 +259,7 @@ static int kvm_recalculate_phys_map(struct kvm_apic_map *new,
 	 */
 	if (vcpu->kvm->arch.x2apic_format) {
 		/* See also kvm_apic_match_physical_addr(). */
-		if ((apic_x2apic_mode(apic) || x2apic_id > 0xff) &&
-		    x2apic_id <= new->max_apic_id)
+		if (apic_x2apic_mode(apic) || x2apic_id > 0xff)
 			new->phys_map[x2apic_id] = apic;

 		if (!apic_x2apic_mode(apic) && !new->phys_map[xapic_id])
@@ -366,6 +371,7 @@ void kvm_recalculate_apic_map(struct kvm *kvm)
 	unsigned long i;
 	u32 max_id = 255; /* enough space for any xAPIC ID */
 	bool xapic_id_mismatch = false;
+	int r;

 	/* Read kvm->arch.apic_map_dirty before kvm->arch.apic_map. */
 	if (atomic_read_acquire(&kvm->arch.apic_map_dirty) == CLEAN)
@@ -386,6 +392,7 @@ void kvm_recalculate_apic_map(struct kvm *kvm)
 		return;
 	}

+retry:
 	kvm_for_each_vcpu(i, vcpu, kvm)
 		if (kvm_apic_present(vcpu))
 			max_id = max(max_id, kvm_x2apic_id(vcpu->arch.apic));
@@ -404,9 +411,13 @@ void kvm_recalculate_apic_map(struct kvm *kvm)
 		if (!kvm_apic_present(vcpu))
 			continue;

-		if (kvm_recalculate_phys_map(new, vcpu, &xapic_id_mismatch)) {
+		r = kvm_recalculate_phys_map(new, vcpu, &xapic_id_mismatch);
+		if (r) {
 			kvfree(new);
 			new = NULL;
+			if (r == -E2BIG)
+				goto retry;
+
 			goto out;
 		}

base-commit: 39428f6ea9eace95011681628717062ff7f5eb5f
--
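[Editor's note: the snapshot/allocate/retry-on--E2BIG pattern proposed above can
be sketched outside of KVM.  This is not kernel code; `source_max_id`, `fill_map()`,
`recalculate()`, and the `grow_once` hook are made-up stand-ins for the vCPU list,
kvm_recalculate_phys_map(), and kvm_recalculate_apic_map().  In KVM proper, the
retries are bounded by the number of possible vCPUs.]

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

/* Map sized for IDs 0..max_id, like the max-sized phys_map buffer. */
struct map {
	size_t max_id;
	int entries[];		/* C99 flexible array member */
};

/* Simulated shared state: "another thread" can grow this between passes. */
static size_t source_max_id;
static int grow_once;		/* test hook: add one "vCPU" mid-recalc */

/* Pass 2: populate the map; fail with -E2BIG if an ID no longer fits. */
static int fill_map(struct map *m)
{
	size_t i;

	if (grow_once) {	/* a new ID appears after the buffer was sized */
		grow_once = 0;
		source_max_id++;
	}

	for (i = 0; i <= source_max_id; i++) {
		if (i > m->max_id)
			return -E2BIG;	/* buffer too small, caller retries */
		m->entries[i] = 1;
	}
	return 0;
}

static struct map *recalculate(void)
{
	struct map *m;
	size_t max;
	int r;

retry:
	max = source_max_id;	/* pass 1: snapshot the current max ID */
	m = calloc(1, sizeof(*m) + (max + 1) * sizeof(int));
	if (!m)
		return NULL;
	m->max_id = max;

	r = fill_map(m);	/* pass 2: may observe growth and fail */
	if (r) {
		free(m);
		if (r == -E2BIG)
			goto retry;	/* reallocate with the bigger size */
		return NULL;
	}
	return m;
}
```

The key property mirrored here is that -E2BIG is a retryable condition rather
than a terminal error, so a concurrently added ID costs one extra pass instead
of silently dropping the optimized map.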