Patch "PCI: vmd: Create domain symlink before pci_bus_add_devices()" has been added to the 5.15-stable tree

This is a note to let you know that I've just added the patch titled

    PCI: vmd: Create domain symlink before pci_bus_add_devices()

to the 5.15-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     pci-vmd-create-domain-symlink-before-pci_bus_add_dev.patch
and it can be found in the queue-5.15 subdirectory.

If you or anyone else feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.



commit 070ae57c1e5982cca1f52156bbf786e19ff67481
Author: Jiwei Sun <sunjw10@xxxxxxxxxx>
Date:   Sun Jul 28 12:08:53 2024 -0400

    PCI: vmd: Create domain symlink before pci_bus_add_devices()
    
    [ Upstream commit f24c9bfcd423e2b2bb0d198456412f614ec2030a ]
    
    The vmd driver creates a "domain" symlink in sysfs for each VMD bridge.
    Previously this symlink was created after pci_bus_add_devices() added
    devices below the VMD bridge and emitted udev events to announce them to
    userspace.
    
    This led to a race between userspace consumers of the udev events and the
    kernel creation of the symlink.  One such consumer is mdadm, which
    assembles block devices into a RAID array, and for devices below a VMD
    bridge, mdadm depends on the "domain" symlink.
    
    If mdadm loses the race, it may be unable to assemble a RAID array,
    which can cause a boot failure or other issues, with complaints like
    these:
    
      (udev-worker)[2149]: nvme1n1: '/sbin/mdadm -I /dev/nvme1n1'(err) 'mdadm: Unable to get real path for '/sys/bus/pci/drivers/vmd/0000:c7:00.5/domain/device''
      (udev-worker)[2149]: nvme1n1: '/sbin/mdadm -I /dev/nvme1n1'(err) 'mdadm: /dev/nvme1n1 is not attached to Intel(R) RAID controller.'
      (udev-worker)[2149]: nvme1n1: '/sbin/mdadm -I /dev/nvme1n1'(err) 'mdadm: No OROM/EFI properties for /dev/nvme1n1'
      (udev-worker)[2149]: nvme1n1: '/sbin/mdadm -I /dev/nvme1n1'(err) 'mdadm: no RAID superblock on /dev/nvme1n1.'
      (udev-worker)[2149]: nvme1n1: Process '/sbin/mdadm -I /dev/nvme1n1' failed with exit code 1.
    
    This symptom prevents the OS from booting successfully.
    
    After an NVMe disk is probed and added by the nvme driver, udevd invokes
    mdadm to detect whether an mdraid array is associated with that disk, and
    mdadm determines whether an NVMe device is connected to a particular VMD
    domain by checking the "domain" symlink. For example:
    
      Thread A                   Thread B             Thread mdadm
      vmd_enable_domain
        pci_bus_add_devices
          __driver_probe_device
           ...
           work_on_cpu
             schedule_work_on
             : wakeup Thread B
                                 nvme_probe
                                 : wakeup scan_work
                                   to scan nvme disk
                                   and add nvme disk
                                   then wakeup udevd
                                                      : udevd executes
                                                        mdadm command
             flush_work                               main
             : wait for nvme_probe done                ...
          __driver_probe_device                        find_driver_devices
          : probe next nvme device                     : 1) Detect domain symlink
          ...                                            2) Find domain symlink
          ...                                               from vmd sysfs
          ...                                            3) Domain symlink not
          ...                                               created yet; failed
        sysfs_create_link
        : create domain symlink
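    
    As an illustrative aside (a minimal sketch, not mdadm's actual code),
    the symlink resolution that fails in step 3) boils down to a
    realpath() call on the sysfs path seen in the log above:
    
      #include <limits.h>
      #include <stdio.h>
      #include <stdlib.h>
      
      int main(void)
      {
              /* Example path, taken from the udev-worker log above. */
              const char *link =
                      "/sys/bus/pci/drivers/vmd/0000:c7:00.5/domain/device";
              char real[PATH_MAX];
      
              /*
               * realpath() fails while the "domain" symlink does not exist
               * yet, which is exactly the race window described above.
               */
              if (!realpath(link, real)) {
                      perror("Unable to get real path");
                      return EXIT_FAILURE;
              }
              printf("%s -> %s\n", link, real);
              return EXIT_SUCCESS;
      }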
    
    Create the VMD "domain" symlink before invoking pci_bus_add_devices() to
    avoid this race.
    
    Suggested-by: Adrian Huang <ahuang12@xxxxxxxxxx>
    Link: https://lore.kernel.org/linux-pci/20240605124844.24293-1-sjiwei@xxxxxxx
    Signed-off-by: Jiwei Sun <sunjw10@xxxxxxxxxx>
    Signed-off-by: Krzysztof Wilczyński <kwilczynski@xxxxxxxxxx>
    [bhelgaas: commit log]
    Signed-off-by: Bjorn Helgaas <bhelgaas@xxxxxxxxxx>
    Reviewed-by: Nirmal Patel <nirmal.patel@xxxxxxxxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
index f49001ba96c7..10a078ef4799 100644
--- a/drivers/pci/controller/vmd.c
+++ b/drivers/pci/controller/vmd.c
@@ -798,6 +798,9 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
 	if (vmd->irq_domain)
 		dev_set_msi_domain(&vmd->bus->dev, vmd->irq_domain);
 
+	WARN(sysfs_create_link(&vmd->dev->dev.kobj, &vmd->bus->dev.kobj,
+			       "domain"), "Can't create symlink to domain\n");
+
 	vmd_acpi_begin();
 
 	pci_scan_child_bus(vmd->bus);
@@ -814,9 +817,6 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
 	pci_bus_add_devices(vmd->bus);
 
 	vmd_acpi_end();
-
-	WARN(sysfs_create_link(&vmd->dev->dev.kobj, &vmd->bus->dev.kobj,
-			       "domain"), "Can't create symlink to domain\n");
 	return 0;
 }
 
@@ -873,8 +873,8 @@ static void vmd_remove(struct pci_dev *dev)
 {
 	struct vmd_dev *vmd = pci_get_drvdata(dev);
 
-	sysfs_remove_link(&vmd->dev->dev.kobj, "domain");
 	pci_stop_root_bus(vmd->bus);
+	sysfs_remove_link(&vmd->dev->dev.kobj, "domain");
 	pci_remove_root_bus(vmd->bus);
 	vmd_cleanup_srcu(vmd);
 	vmd_detach_resources(vmd);
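
For reference, with both hunks applied, the relevant part of
vmd_enable_domain() reads roughly as follows (condensed from the diff
above; unchanged code between the hunks is elided):

  static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
  {
          ...
          if (vmd->irq_domain)
                  dev_set_msi_domain(&vmd->bus->dev, vmd->irq_domain);

          /*
           * Create the symlink before any child device below the VMD
           * bridge is scanned, added, or announced to userspace via udev.
           */
          WARN(sysfs_create_link(&vmd->dev->dev.kobj, &vmd->bus->dev.kobj,
                                 "domain"), "Can't create symlink to domain\n");

          vmd_acpi_begin();

          pci_scan_child_bus(vmd->bus);
          ...
          pci_bus_add_devices(vmd->bus);

          vmd_acpi_end();
          return 0;
  }

Correspondingly, vmd_remove() now removes the symlink only after
pci_stop_root_bus() has stopped the devices below the bridge, mirroring
the new creation order.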



