On 2019/10/29 0:30, Matthew Wilcox wrote:
> On Mon, Oct 28, 2019 at 05:18:09PM +0800, Xiang Zheng wrote:
>> Commit 7ea7e98fd8d0 suggests that holding "pci_lock" is sufficient,
>> and all the callers of pci_wait_cfg() are wrapped with the "pci_lock".
>>
>> However, since commit cdcb33f98244 was merged, the accesses to the
>> pci_cfg_wait queue are no longer safe. The "pci_lock" alone is
>> insufficient; we need to hold the additional wait queue lock while
>> reading/writing the wait queue.
>>
>> So let's use add_wait_queue()/remove_wait_queue() instead of
>> __add_wait_queue()/__remove_wait_queue().
>
> As I said earlier, this reintroduces the deadlock addressed by
> cdcb33f9824429a926b971bf041a6cec238f91ff

Thanks Matthew, and sorry that I sent this patch without understanding
how it reintroduces the deadlock.

If I understand it correctly now, the deadlock needs three processes
holding and waiting on locks like this:

  Process                            Acquired         Wait For
  wake_up_all()                      wq_head->lock    pi_lock
  snbep_uncore_pci_read_counter()    pi_lock          pci_lock
  pci_wait_cfg()                     pci_lock         wq_head->lock

The three acquisitions form a cycle, so these processes deadlock on
the nested locks. :)

But for this problem, what do you think about the solution below? It
takes the wait queue lock only after "pci_lock" has been dropped (a
commented sketch of my reasoning follows the patch):

diff --git a/drivers/pci/access.c b/drivers/pci/access.c
index 2fccb5762c76..09342a74e5ea 100644
--- a/drivers/pci/access.c
+++ b/drivers/pci/access.c
@@ -207,14 +207,14 @@ static noinline void pci_wait_cfg(struct pci_dev *dev)
 {
 	DECLARE_WAITQUEUE(wait, current);
 
-	__add_wait_queue(&pci_cfg_wait, &wait);
 	do {
 		set_current_state(TASK_UNINTERRUPTIBLE);
 		raw_spin_unlock_irq(&pci_lock);
+		add_wait_queue(&pci_cfg_wait, &wait);
 		schedule();
+		remove_wait_queue(&pci_cfg_wait, &wait);
 		raw_spin_lock_irq(&pci_lock);
 	} while (dev->block_cfg_access);
-	__remove_wait_queue(&pci_cfg_wait, &wait);
 }
 
 /* Returns 0 on success, negative values indicate error. */
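To spell out that reasoning, here is the cycle written out as call
chains in a comment. The intermediate steps (e.g. wake_up_all()
reaching pi_lock via try_to_wake_up(), and the perf path holding
pi_lock around the counter read) are my reading of the code and of
the table above, not something I have verified with lockdep:

/*
 * CPU0: wake_up_all(&pci_cfg_wait)
 *           takes pci_cfg_wait's wq_head->lock          <-- held
 *         try_to_wake_up(waiter)
 *           takes the waiter's p->pi_lock               <-- wanted
 *
 * CPU1: snbep_uncore_pci_read_counter()   (p->pi_lock held)
 *         pci_read_config_dword()
 *           takes pci_lock                              <-- wanted
 *
 * CPU2: pci_wait_cfg()   (pci_lock held)
 *         add_wait_queue(&pci_cfg_wait, &wait)
 *           takes pci_cfg_wait's wq_head->lock          <-- wanted
 *
 * wq_head->lock -> pi_lock -> pci_lock -> wq_head->lock: a cycle.
 */

With the patch, the CPU2 chain calls add_wait_queue()/remove_wait_queue()
only after raw_spin_unlock_irq(&pci_lock), so nothing waits for
wq_head->lock while holding "pci_lock", and the cycle above cannot
close.

-- 
Thanks,
Xiang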