This is a note to let you know that I've just added the patch titled

    KVM: SVM: Set target pCPU during IRTE update if target vCPU is running

to the 6.5-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     kvm-svm-set-target-pcpu-during-irte-update-if-target-vcpu-is-running.patch
and it can be found in the queue-6.5 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.


>From f3cebc75e7425d6949d726bb8e937095b0aef025 Mon Sep 17 00:00:00 2001
From: Sean Christopherson <seanjc@xxxxxxxxxx>
Date: Tue, 8 Aug 2023 16:31:32 -0700
Subject: KVM: SVM: Set target pCPU during IRTE update if target vCPU is running

From: Sean Christopherson <seanjc@xxxxxxxxxx>

commit f3cebc75e7425d6949d726bb8e937095b0aef025 upstream.

Update the target pCPU for IOMMU doorbells when updating IRTE routing if
KVM is actively running the associated vCPU.  KVM currently only updates
the pCPU when loading the vCPU (via avic_vcpu_load()), and so doorbell
events will be delayed until the vCPU goes through a put+load cycle (which
might very well "never" happen for the lifetime of the VM).

To avoid inserting a stale pCPU, e.g. due to racing between updating IRTE
routing and vCPU load/put, get the pCPU information from the vCPU's
Physical APIC ID table entry (a.k.a. avic_physical_id_cache in KVM) and
update the IRTE while holding ir_list_lock.

Add comments with --verbose enabled to explain exactly what is and isn't
protected by ir_list_lock.

Fixes: 411b44ba80ab ("svm: Implements update_pi_irte hook to setup posted interrupt")
Reported-by: dengqiao.joey <dengqiao.joey@xxxxxxxxxxxxx>
Cc: stable@xxxxxxxxxxxxxxx
Cc: Alejandro Jimenez <alejandro.j.jimenez@xxxxxxxxxx>
Cc: Joao Martins <joao.m.martins@xxxxxxxxxx>
Cc: Maxim Levitsky <mlevitsk@xxxxxxxxxx>
Cc: Suravee Suthikulpanit <suravee.suthikulpanit@xxxxxxx>
Tested-by: Alejandro Jimenez <alejandro.j.jimenez@xxxxxxxxxx>
Reviewed-by: Joao Martins <joao.m.martins@xxxxxxxxxx>
Link: https://lore.kernel.org/r/20230808233132.2499764-3-seanjc@xxxxxxxxxx
Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 arch/x86/kvm/svm/avic.c |   28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

--- a/arch/x86/kvm/svm/avic.c
+++ b/arch/x86/kvm/svm/avic.c
@@ -791,6 +791,7 @@ static int svm_ir_list_add(struct vcpu_s
 	int ret = 0;
 	unsigned long flags;
 	struct amd_svm_iommu_ir *ir;
+	u64 entry;
 
 	/**
 	 * In some cases, the existing irte is updated and re-set,
@@ -824,6 +825,18 @@ static int svm_ir_list_add(struct vcpu_s
 	ir->data = pi->ir_data;
 
 	spin_lock_irqsave(&svm->ir_list_lock, flags);
+
+	/*
+	 * Update the target pCPU for IOMMU doorbells if the vCPU is running.
+	 * If the vCPU is NOT running, i.e. is blocking or scheduled out, KVM
+	 * will update the pCPU info when the vCPU awkened and/or scheduled in.
+	 * See also avic_vcpu_load().
+	 */
+	entry = READ_ONCE(*(svm->avic_physical_id_cache));
+	if (entry & AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK)
+		amd_iommu_update_ga(entry & AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK,
+				    true, pi->ir_data);
+
 	list_add(&ir->node, &svm->ir_list);
 	spin_unlock_irqrestore(&svm->ir_list_lock, flags);
 out:
@@ -1031,6 +1044,13 @@ void avic_vcpu_load(struct kvm_vcpu *vcp
 	if (kvm_vcpu_is_blocking(vcpu))
 		return;
 
+	/*
+	 * Grab the per-vCPU interrupt remapping lock even if the VM doesn't
+	 * _currently_ have assigned devices, as that can change.  Holding
+	 * ir_list_lock ensures that either svm_ir_list_add() will consume
+	 * up-to-date entry information, or that this task will wait until
+	 * svm_ir_list_add() completes to set the new target pCPU.
+	 */
 	spin_lock_irqsave(&svm->ir_list_lock, flags);
 
 	entry = READ_ONCE(*(svm->avic_physical_id_cache));
@@ -1067,6 +1087,14 @@ void avic_vcpu_put(struct kvm_vcpu *vcpu
 	if (!(entry & AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK))
 		return;
 
+	/*
+	 * Take and hold the per-vCPU interrupt remapping lock while updating
+	 * the Physical ID entry even though the lock doesn't protect against
+	 * multiple writers (see above).  Holding ir_list_lock ensures that
+	 * either svm_ir_list_add() will consume up-to-date entry information,
+	 * or that this task will wait until svm_ir_list_add() completes to
+	 * mark the vCPU as not running.
+	 */
 	spin_lock_irqsave(&svm->ir_list_lock, flags);
 
 	avic_update_iommu_vcpu_affinity(vcpu, -1, 0);


Patches currently in stable-queue which might be from seanjc@xxxxxxxxxx are

queue-6.5/perf-header-fix-missing-pmu-caps.patch
queue-6.5/kvm-svm-don-t-inject-ud-if-kvm-attempts-to-skip-sev-guest-insn.patch
queue-6.5/kvm-nsvm-check-instead-of-asserting-on-nested-tsc-scaling-support.patch
queue-6.5/drm-i915-gvt-verify-pfn-is-valid-before-dereferencin.patch
queue-6.5/kvm-nsvm-load-l1-s-tsc-multiplier-based-on-l1-state-not-l2-state.patch
queue-6.5/x86-virt-drop-unnecessary-check-on-extended-cpuid-le.patch
queue-6.5/kvm-svm-get-source-vcpus-from-source-vm-for-sev-es-intrahost-migration.patch
queue-6.5/kvm-svm-skip-vmsa-init-in-sev_es_init_vmcb-if-pointer-is-null.patch
queue-6.5/kvm-svm-take-and-hold-ir_list_lock-when-updating-vcpu-s-physical-id-entry.patch
queue-6.5/kvm-svm-set-target-pcpu-during-irte-update-if-target-vcpu-is-running.patch
queue-6.5/perf-lock-don-t-pass-an-err_ptr-directly-to-perf_ses.patch
queue-6.5/drm-i915-gvt-put-the-page-reference-obtained-by-kvm-.patch
queue-6.5/drm-i915-gvt-drop-unused-helper-intel_vgpu_reset_gtt.patch
queue-6.5/kvm-vmx-refresh-available-regs-and-idt-vectoring-info-before-nmi-handling.patch
queue-6.5/kvm-svm-don-t-defer-nmi-unblocking-until-next-exit-f.patch