From: Jiwei Sun <sunjw10@xxxxxxxxxx>

[ Upstream commit f24c9bfcd423e2b2bb0d198456412f614ec2030a ]

The vmd driver creates a "domain" symlink in sysfs for each VMD bridge.
Previously this symlink was created after pci_bus_add_devices() added
devices below the VMD bridge and emitted udev events to announce them to
userspace.

This led to a race between userspace consumers of the udev events and the
kernel creation of the symlink.  One such consumer is mdadm, which
assembles block devices into a RAID array; for devices below a VMD
bridge, mdadm depends on the "domain" symlink.

If mdadm loses the race, it may be unable to assemble a RAID array, which
may cause a boot failure or other issues, with complaints like this:

  (udev-worker)[2149]: nvme1n1: '/sbin/mdadm -I /dev/nvme1n1'(err) 'mdadm: Unable to get real path for '/sys/bus/pci/drivers/vmd/0000:c7:00.5/domain/device''
  (udev-worker)[2149]: nvme1n1: '/sbin/mdadm -I /dev/nvme1n1'(err) 'mdadm: /dev/nvme1n1 is not attached to Intel(R) RAID controller.'
  (udev-worker)[2149]: nvme1n1: '/sbin/mdadm -I /dev/nvme1n1'(err) 'mdadm: No OROM/EFI properties for /dev/nvme1n1'
  (udev-worker)[2149]: nvme1n1: '/sbin/mdadm -I /dev/nvme1n1'(err) 'mdadm: no RAID superblock on /dev/nvme1n1.'
  (udev-worker)[2149]: nvme1n1: Process '/sbin/mdadm -I /dev/nvme1n1' failed with exit code 1.

This symptom prevents the OS from booting successfully.

After an NVMe disk is probed/added by the nvme driver, udevd invokes mdadm
to detect whether there is an mdraid associated with this NVMe disk, and
mdadm determines whether an NVMe device is connected to a particular VMD
domain by checking the "domain" symlink.  For example:

  Thread A                   Thread B               Thread mdadm
  vmd_enable_domain
    pci_bus_add_devices
      __driver_probe_device
      ...
      work_on_cpu
        schedule_work_on
        : wakeup Thread B
                             nvme_probe
                             : wakeup scan_work
                               to scan nvme disk
                               and add nvme disk
                               then wakeup udevd
                                                    : udevd executes
                                                      mdadm command
        flush_work                                  main
        : wait for nvme_probe done                  ...
      __driver_probe_device                         find_driver_devices
      : probe next nvme device                      : 1) Detect domain symlink
        ...                                           2) Find domain symlink
        ...                                              from vmd sysfs
        ...                                           3) Domain symlink not
        ...                                              created yet; failed
  sysfs_create_link
  : create domain symlink

Create the VMD "domain" symlink before invoking pci_bus_add_devices() to
avoid this race.
Suggested-by: Adrian Huang <ahuang12@xxxxxxxxxx>
Link: https://lore.kernel.org/linux-pci/20240605124844.24293-1-sjiwei@xxxxxxx
Signed-off-by: Jiwei Sun <sunjw10@xxxxxxxxxx>
Signed-off-by: Krzysztof Wilczyński <kwilczynski@xxxxxxxxxx>
[bhelgaas: commit log]
Signed-off-by: Bjorn Helgaas <bhelgaas@xxxxxxxxxx>
Reviewed-by: Nirmal Patel <nirmal.patel@xxxxxxxxxxxxxxx>
Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>
---
 drivers/pci/controller/vmd.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
index f49001ba96c75..10a078ef4799d 100644
--- a/drivers/pci/controller/vmd.c
+++ b/drivers/pci/controller/vmd.c
@@ -798,6 +798,9 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
 	if (vmd->irq_domain)
 		dev_set_msi_domain(&vmd->bus->dev, vmd->irq_domain);
 
+	WARN(sysfs_create_link(&vmd->dev->dev.kobj, &vmd->bus->dev.kobj,
+		       "domain"), "Can't create symlink to domain\n");
+
 	vmd_acpi_begin();
 
 	pci_scan_child_bus(vmd->bus);
@@ -814,9 +817,6 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
 	pci_bus_add_devices(vmd->bus);
 
 	vmd_acpi_end();
-
-	WARN(sysfs_create_link(&vmd->dev->dev.kobj, &vmd->bus->dev.kobj,
-		       "domain"), "Can't create symlink to domain\n");
 	return 0;
 }
 
@@ -873,8 +873,8 @@ static void vmd_remove(struct pci_dev *dev)
 {
 	struct vmd_dev *vmd = pci_get_drvdata(dev);
 
-	sysfs_remove_link(&vmd->dev->dev.kobj, "domain");
 	pci_stop_root_bus(vmd->bus);
+	sysfs_remove_link(&vmd->dev->dev.kobj, "domain");
 	pci_remove_root_bus(vmd->bus);
 	vmd_cleanup_srcu(vmd);
 	vmd_detach_resources(vmd);
-- 
2.43.0
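
For reference, below is a minimal userspace sketch of the consumer side of
the race, assuming only the sysfs path layout shown in the udev log above.
It resolves the VMD "domain" symlink with realpath(), which is roughly the
lookup behind mdadm's "Unable to get real path" failure; the helper name
vmd_domain_device() is hypothetical, and this is not mdadm source code.

  /*
   * Illustration only, not part of the patch: resolve the VMD "domain"
   * symlink the way a udev consumer such as mdadm must.  If this runs
   * before vmd_enable_domain() has created the link, realpath() fails
   * with ENOENT.  The helper name vmd_domain_device() is hypothetical.
   */
  #include <limits.h>
  #include <stdio.h>
  #include <stdlib.h>

  static int vmd_domain_device(const char *vmd_bdf)
  {
  	char link[PATH_MAX], real[PATH_MAX];

  	snprintf(link, sizeof(link),
  		 "/sys/bus/pci/drivers/vmd/%s/domain/device", vmd_bdf);

  	/*
  	 * Before the fix, the udev event for the NVMe disk could arrive
  	 * while the "domain" symlink did not yet exist, so this lookup
  	 * failed and RAID assembly for the device was skipped.
  	 */
  	if (!realpath(link, real)) {
  		perror(link);
  		return -1;
  	}

  	printf("%s -> %s\n", link, real);
  	return 0;
  }

  int main(void)
  {
  	/* BDF of the VMD endpoint taken from the log above. */
  	return vmd_domain_device("0000:c7:00.5") ? EXIT_FAILURE : EXIT_SUCCESS;
  }

With the symlink created before pci_bus_add_devices(), a lookup like the
one above succeeds regardless of when udevd schedules the mdadm run.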