On 05/11/2012 07:54 AM, Amos Kong wrote:
> On 05/11/2012 02:55 AM, Michael S. Tsirkin wrote:
>> On Fri, May 11, 2012 at 01:09:13AM +0800, Jiang Liu wrote:
>>> On 05/10/2012 11:44 PM, Amos Kong wrote:
>>>
>>>> diff --git a/drivers/pci/hotplug/acpiphp_glue.c b/drivers/pci/hotplug/acpiphp_glue.c
>>>> index 806c44f..a7442d9 100644
>>>> --- a/drivers/pci/hotplug/acpiphp_glue.c
>>>> +++ b/drivers/pci/hotplug/acpiphp_glue.c
>>>> @@ -885,7 +885,7 @@ static void disable_bridges(struct pci_bus *bus)
>>>>  static int disable_device(struct acpiphp_slot *slot)
>>>>  {
>>>>  	struct acpiphp_func *func;
>>>> -	struct pci_dev *pdev;
>>>> +	struct pci_dev *pdev, *tmp;
>>>>  	struct pci_bus *bus = slot->bridge->pci_bus;
>>>>
>>>>  	/* The slot will be enabled when func 0 is added, so check
>>>> @@ -902,9 +902,10 @@ static int disable_device(struct acpiphp_slot *slot)
>>>>  		func->bridge = NULL;
>>>>  	}
>>>>
>>>> -	pdev = pci_get_slot(slot->bridge->pci_bus,
>>>> -			    PCI_DEVFN(slot->device, func->function));
>>>> -	if (pdev) {
>>>> +	list_for_each_entry_safe(pdev, tmp, &bus->devices, bus_list) {
>>>> +		if (PCI_SLOT(pdev->devfn) != slot->device)
>>>> +			continue;
>>>> +
>>>
>>> The pci_bus_sem lock should be acquired when walking the bus->devices
>>> list. Otherwise it may cause invalid memory access if another thread is
>>> modifying the bus->devices list concurrently.

The pci_bus_sem lock is only required for writing to the &bus->devices
list, right? And that protection already exists in pci_destroy_dev():

static int disable_device(struct acpiphp_slot *slot)
 \_ list_for_each_entry_safe(pdev, tmp, &bus->devices, bus_list) {
     \_ __pci_remove_bus_device(pdev);
         \_ pci_destroy_dev(dev);

static void pci_destroy_dev(struct pci_dev *dev)
{
	/* Remove the device from the device lists, and prevent any further
	 * list accesses from this device */
	down_write(&pci_bus_sem);
	list_del(&dev->bus_list);
	dev->bus_list.next = dev->bus_list.prev = NULL;
	up_write(&pci_bus_sem);
	...
}

-- 
Amos.
-- To unsubscribe from this list: send the line "unsubscribe linux-pci" in the body of a message to majordomo@xxxxxxxxxxxxxxx More majordomo info at http://vger.kernel.org/majordomo-info.html