This is a note to let you know that I've just added the patch titled

    vfio/pci: Lock external INTx masking ops

to the 5.10-stable tree which can be found at:

    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     vfio-pci-lock-external-intx-masking-ops.patch
and it can be found in the queue-5.10 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.


>From stable+bounces-35123-greg=kroah.com@xxxxxxxxxxxxxxx Mon Apr 1 18:54:00 2024
From: Alex Williamson <alex.williamson@xxxxxxxxxx>
Date: Mon, 1 Apr 2024 10:52:56 -0600
Subject: vfio/pci: Lock external INTx masking ops
To: stable@xxxxxxxxxxxxxxx
Cc: Alex Williamson <alex.williamson@xxxxxxxxxx>, sashal@xxxxxxxxxx, gregkh@xxxxxxxxxxxxxxxxxxx, eric.auger@xxxxxxxxxx, Reinette Chatre <reinette.chatre@xxxxxxxxx>, Kevin Tian <kevin.tian@xxxxxxxxx>
Message-ID: <20240401165302.3699643-3-alex.williamson@xxxxxxxxxx>

From: Alex Williamson <alex.williamson@xxxxxxxxxx>

[ Upstream commit 810cd4bb53456d0503cc4e7934e063835152c1b7 ]

Mask operations through config space changes to DisINTx may race INTx
configuration changes via ioctl.  Create wrappers that add locking for
paths outside of the core interrupt code.

In particular, irq_type is updated holding igate, therefore testing
is_intx() requires holding igate.  For example clearing DisINTx from
config space can otherwise race changes of the interrupt configuration.

This aligns interfaces which may trigger the INTx eventfd into two
camps, one side serialized by igate and the other only enabled while
INTx is configured.  A subsequent patch introduces synchronization for
the latter flows.
Cc: <stable@xxxxxxxxxxxxxxx>
Fixes: 89e1f7d4c66d ("vfio: Add PCI device driver")
Reported-by: Reinette Chatre <reinette.chatre@xxxxxxxxx>
Reviewed-by: Kevin Tian <kevin.tian@xxxxxxxxx>
Reviewed-by: Reinette Chatre <reinette.chatre@xxxxxxxxx>
Reviewed-by: Eric Auger <eric.auger@xxxxxxxxxx>
Link: https://lore.kernel.org/r/20240308230557.805580-3-alex.williamson@xxxxxxxxxx
Signed-off-by: Alex Williamson <alex.williamson@xxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 drivers/vfio/pci/vfio_pci_intrs.c |   30 ++++++++++++++++++++++++------
 1 file changed, 24 insertions(+), 6 deletions(-)

--- a/drivers/vfio/pci/vfio_pci_intrs.c
+++ b/drivers/vfio/pci/vfio_pci_intrs.c
@@ -33,11 +33,13 @@ static void vfio_send_intx_eventfd(void
 		eventfd_signal(vdev->ctx[0].trigger, 1);
 }
 
-void vfio_pci_intx_mask(struct vfio_pci_device *vdev)
+static void __vfio_pci_intx_mask(struct vfio_pci_device *vdev)
 {
 	struct pci_dev *pdev = vdev->pdev;
 	unsigned long flags;
 
+	lockdep_assert_held(&vdev->igate);
+
 	spin_lock_irqsave(&vdev->irqlock, flags);
 
 	/*
@@ -65,6 +67,13 @@ void vfio_pci_intx_mask(struct vfio_pci_
 	spin_unlock_irqrestore(&vdev->irqlock, flags);
 }
 
+void vfio_pci_intx_mask(struct vfio_pci_device *vdev)
+{
+	mutex_lock(&vdev->igate);
+	__vfio_pci_intx_mask(vdev);
+	mutex_unlock(&vdev->igate);
+}
+
 /*
  * If this is triggered by an eventfd, we can't call eventfd_signal
  * or else we'll deadlock on the eventfd wait queue.  Return >0 when
@@ -107,12 +116,21 @@ static int vfio_pci_intx_unmask_handler(
 	return ret;
 }
 
-void vfio_pci_intx_unmask(struct vfio_pci_device *vdev)
+static void __vfio_pci_intx_unmask(struct vfio_pci_device *vdev)
 {
+	lockdep_assert_held(&vdev->igate);
+
 	if (vfio_pci_intx_unmask_handler(vdev, NULL) > 0)
 		vfio_send_intx_eventfd(vdev, NULL);
 }
 
+void vfio_pci_intx_unmask(struct vfio_pci_device *vdev)
+{
+	mutex_lock(&vdev->igate);
+	__vfio_pci_intx_unmask(vdev);
+	mutex_unlock(&vdev->igate);
+}
+
 static irqreturn_t vfio_intx_handler(int irq, void *dev_id)
 {
 	struct vfio_pci_device *vdev = dev_id;
@@ -428,11 +446,11 @@ static int vfio_pci_set_intx_unmask(stru
 		return -EINVAL;
 
 	if (flags & VFIO_IRQ_SET_DATA_NONE) {
-		vfio_pci_intx_unmask(vdev);
+		__vfio_pci_intx_unmask(vdev);
 	} else if (flags & VFIO_IRQ_SET_DATA_BOOL) {
 		uint8_t unmask = *(uint8_t *)data;
 		if (unmask)
-			vfio_pci_intx_unmask(vdev);
+			__vfio_pci_intx_unmask(vdev);
 	} else if (flags & VFIO_IRQ_SET_DATA_EVENTFD) {
 		int32_t fd = *(int32_t *)data;
 		if (fd >= 0)
@@ -455,11 +473,11 @@ static int vfio_pci_set_intx_mask(struct
 		return -EINVAL;
 
 	if (flags & VFIO_IRQ_SET_DATA_NONE) {
-		vfio_pci_intx_mask(vdev);
+		__vfio_pci_intx_mask(vdev);
 	} else if (flags & VFIO_IRQ_SET_DATA_BOOL) {
 		uint8_t mask = *(uint8_t *)data;
 		if (mask)
-			vfio_pci_intx_mask(vdev);
+			__vfio_pci_intx_mask(vdev);
 	} else if (flags & VFIO_IRQ_SET_DATA_EVENTFD) {
 		return -ENOTTY; /* XXX implement me */
 	}


Patches currently in stable-queue which might be from kroah.com@xxxxxxxxxxxxxxx are

queue-5.10/x86-rfds-mitigate-register-file-data-sampling-rfds.patch
queue-5.10/vfio-pci-create-persistent-intx-handler.patch
queue-5.10/x86-entry_32-add-verw-just-before-userspace-transition.patch
queue-5.10/vfio-fsl-mc-block-calling-interrupt-handler-without-trigger.patch
queue-5.10/x86-bugs-add-asm-helpers-for-executing-verw.patch
queue-5.10/vfio-pci-disable-auto-enable-of-exclusive-intx-irq.patch
queue-5.10/vfio-pci-lock-external-intx-masking-ops.patch
queue-5.10/vfio-introduce-interface-to-flush-virqfd-inject-workqueue.patch
queue-5.10/kvm-x86-export-rfds_no-and-rfds_clear-to-guests.patch
queue-5.10/x86-asm-add-_asm_rip-macro-for-x86-64-rip-suffix.patch
queue-5.10/x86-entry_64-add-verw-just-before-userspace-transition.patch
queue-5.10/x86-mmio-disable-kvm-mitigation-when-x86_feature_clear_cpu_buf-is-set.patch
queue-5.10/x86-bugs-use-alternative-instead-of-mds_user_clear-static-key.patch
queue-5.10/documentation-hw-vuln-add-documentation-for-rfds.patch
queue-5.10/kvm-vmx-use-bt-jnc-i.e.-eflags.cf-to-select-vmresume-vs.-vmlaunch.patch
queue-5.10/mm-migrate-set-swap-entry-values-of-thp-tail-pages-properly.patch
queue-5.10/kvm-vmx-move-verw-closer-to-vmentry-for-mds-mitigation.patch
queue-5.10/vfio-platform-create-persistent-irq-handlers.patch