On Thu, Sep 10, 2020 at 09:54:02AM +0800, Jiang Biao wrote:
> Hi,
>
> On Thu, 10 Sep 2020 at 09:25, Bjorn Helgaas <helgaas@xxxxxxxxxx> wrote:
> >
> > On Mon, Aug 24, 2020 at 01:20:25PM +0800, Jiang Biao wrote:
> > > From: Jiang Biao <benbjiang@xxxxxxxxxxx>
> > >
> > > pci_read_config() could block several ms in kernel space, mainly
> > > caused by the while loop calling pci_user_read_config_dword().
> > > A single pci_user_read_config_dword() loop iteration could consume 130us+,
> > >                |    pci_user_read_config_dword() {
> > >                |      _raw_spin_lock_irq() {
> > > ! 136.698 us   |        native_queued_spin_lock_slowpath();
> > > ! 137.582 us   |      }
> > >                |      pci_read() {
> > >                |        raw_pci_read() {
> > >                |          pci_conf1_read() {
> > >   0.230 us     |            _raw_spin_lock_irqsave();
> > >   0.035 us     |            _raw_spin_unlock_irqrestore();
> > >   8.476 us     |          }
> > >   8.790 us     |        }
> > >   9.091 us     |      }
> > > ! 147.263 us   |    }
> > > and dozens of loop iterations could add up to ms+.
> > >
> > > If we execute some lspci commands concurrently, ms+ scheduling
> > > latency could be detected.
> > >
> > > Add a scheduling chance in the loop to improve the latency.
> >
> > Thanks for the patch, this makes a lot of sense.
> >
> > Shouldn't we do the same in pci_write_config()?
> Yes, IMHO, that could be helpful too.

If it's feasible, it would be nice to actually verify that it makes a
difference.  I know config writes should be faster than reads, but
they're certainly not as fast as a CPU can pump out data, so there
must be *some* mechanism that slows the CPU down.

Bjorn
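
For reference, the change being discussed is roughly of the following
shape.  This is a minimal sketch rather than the actual patch under
review: it assumes the dword loop of pci_read_config() in
drivers/pci/pci-sysfs.c and uses cond_resched() as the "scheduling
chance"; the real function also handles unaligned leading/trailing
bytes and access checks.

#include <linux/pci.h>
#include <linux/sched.h>

/*
 * Simplified sketch of the dword loop in pci_read_config(): copy the
 * config space dword by dword and give other tasks a chance to run
 * between accesses, since each pci_user_read_config_dword() can take
 * 100us+ under lock contention.  pci_write_config() has an analogous
 * loop where the same cond_resched() would apply.
 */
static void pci_read_config_dwords_sketch(struct pci_dev *dev, u8 *data,
					  loff_t off, size_t size)
{
	loff_t init_off = off;

	while (size > 3) {
		u32 val;

		pci_user_read_config_dword(dev, off, &val);
		data[off - init_off] = val & 0xff;
		data[off - init_off + 1] = (val >> 8) & 0xff;
		data[off - init_off + 2] = (val >> 16) & 0xff;
		data[off - init_off + 3] = (val >> 24) & 0xff;
		off += 4;
		size -= 4;

		/*
		 * Yield the CPU if needed so concurrent lspci runs don't
		 * pile up ms+ of scheduling latency.
		 */
		cond_resched();
	}
}

Whether the same is worth doing on the write side is the open question
above: config writes should be faster, but, as noted, they are not free.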