On Wed, Apr 03, 2013 at 04:14:17PM +0800, Joseph Lo wrote:
> On Wed, 2013-04-03 at 15:54 +0800, Thierry Reding wrote:
> > * PGP Signed by an unknown key
> >
> > Hi Joseph,
> >
> > I didn't want to hijack the other thread, so I'm starting a new one.
> > I've recently been trying to get the PCIe ethernet device on TrimSlice
> > working on top of the Tegra PCIe rework patches that I've been carrying
> > for quite a while.
> >
> > A bit of background: when I last tried this back in January things still
> > worked fine, but I noticed that they aren't working on recent linux-next
> > versions. I was able to track down next-20130123 as the last working
> > version and next-20130128 as the first broken one (anything in between
> > didn't boot properly).
> >
> > It turns out that the introduction of CPU idle (LP2) seems to have
> > introduced this breakage. Normally what I'd do is:
> >
> > 	$ ifconfig eth0 up
> >
> > and wait for a few seconds for the kernel to report that a link has been
> > detected, after which running
> >
> > 	$ dhcpcd eth0
> >
> > will successfully obtain an IP address. However, with the CPU idle
> > support enabled, the network interface no longer detects a link. The
> > reason for this is that the MSI that would usually occur after link
> > detection by the hardware never occurs when LP2 is enabled.
> >
> > I've verified on top of next-20130402 that commenting out entry "1" in
> > tegra_idle_states (LP2) in cpuidle-tegra20.c "fixes" the issue. I'm able
> > to obtain an IP address via DHCP and use the network interface as usual.
> >
> > I already talked to Stephen and Peter about this on IRC and none of us
> > could come up with a good explanation. Since you wrote the CPU idle
> > support I thought you might be able to shed some light on this.
> >
> Do you mean after enabling CPU idle LP2 the PCIe ethernet driver not
> work anymore?

The driver still works; there is no crash or anything. But the MSI is no
longer received by the CPU.
I'm not even sure the network driver is in any way involved here,
because the MSI functionality is provided by the Tegra PCIe controller
driver.

> Does the driver use any runtime PM and generic power domain that hook
> to the LP2 state of CPU idle?

The driver does indeed use the generic pm_runtime_*() functions. I'm
not sure how much they can influence the LP2 state, though.

> Can you point me out the driver source? Because I don't have any device
> that support PCIe interface. I may not have chance to repo this.

The network driver is drivers/net/ethernet/realtek/r8169.c, and you can
find the PCIe controller driver (which provides the MSI functionality)
here[0].

> But normally, the idle driver should not break driver.

I agree. Some of the speculation on IRC was that the MSI might be lost
because the interrupt controller doesn't wake up the CPU properly to
deliver the interrupt, or that the interrupt is delivered to the wrong
CPU and therefore lost. But I don't know either the interrupt
controller or CPU idle in enough detail to substantiate those theories.

> BTW, you can also check the minimal interval to keep the connection
> alive. For Tegra20, it need at lease 10 mS for CPU cluster power down.
> It means when CPU go into LP2, even there is an interrupt wake up him
> immediately. The CPU need to wait for power ready. It's 10mS. Only
> Tegra20 had this limitation.

Okay, theoretically we could have something like the following sequence:

	1) user runs "ifconfig eth0 up"
	2) driver programs network interface to bring up link
	3) driver waits for IRQ
	4) CPU goes to idle
	5) MSI is received
	6) CPU is woken up by interrupt controller
	7) CPU needs 10 ms before power is ready

And I'd expect step 8 to be:

	8) interrupt delivered to CPU

no matter how long the CPU needs to wake up. Or maybe I didn't
understand what you were saying.

Thierry

[0]: https://gitorious.org/thierryreding/linux/blobs/tegra/next/drivers/pci/host/pci-tegra.c
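P.S.: One way to check whether the MSI fires at all would be to watch
the per-IRQ counters in /proc/interrupts around bringing the link up.
A rough sketch (the grep pattern is a guess at how the r8169 MSI line
is labelled; adjust it to whatever /proc/interrupts shows on the board):

```shell
# Snapshot the MSI/eth0 interrupt counters, bring the link up, wait for
# link detection, then compare. If LP2 swallows the MSI, the counter
# for that line never increments.
grep -iE 'msi|eth0' /proc/interrupts || echo "no matching IRQ lines"
# ifconfig eth0 up && sleep 5    # uncomment when run on the target board
grep -iE 'msi|eth0' /proc/interrupts || echo "no matching IRQ lines"
```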