Re: [PATCH] PCI ASPM: more detailed ASPM configuration

Shaohua Li wrote:
On Fri, Jun 26, 2009 at 10:59:20AM +0800, Kenji Kaneshige wrote:
Shaohua Li wrote:
On Fri, Jun 19, 2009 at 01:28:44PM +0800, Kenji Kaneshige wrote:
Shaohua Li wrote:
On Fri, Jun 19, 2009 at 11:19:49AM +0800, Kenji Kaneshige wrote:
Shaohua Li wrote:
On Fri, Jun 19, 2009 at 05:28:41AM +0800, Jesse Barnes wrote:
On Thu, 21 May 2009 11:10:06 +0900
Kenji Kaneshige <kaneshige.kenji@xxxxxxxxxxxxxx> wrote:

Hi,

This patch changes ASPM driver to enable more detailed ASPM
configuration. With this patch,

- ASPM can be enabled partially in the hierarchy.
- L0s can be managed for each direction (upstream direction and
  downstream direction) of the link.
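To illustrate the second point, here is a rough sketch (illustration only, not
code from the patch; the pcie_capability_* accessors are from later kernels):
per-direction L0s comes down to programming the ASPM control field of the Link
Control register differently on the two ends of a link, since upstream-direction
L0s is entered by the downstream component's transmitter and downstream-direction
L0s by the upstream component's.

#include <linux/pci.h>

/*
 * Sketch: enable L0s for only one direction of the link between
 * 'parent' (root port / switch downstream port) and 'child' (the
 * device below it).
 */
static void sketch_enable_l0s_one_direction(struct pci_dev *parent,
					    struct pci_dev *child,
					    bool upstream_direction)
{
	u16 parent_set = upstream_direction ? 0 : PCI_EXP_LNKCTL_ASPM_L0S;
	u16 child_set  = upstream_direction ? PCI_EXP_LNKCTL_ASPM_L0S : 0;

	/* Clear both ASPM enable bits first, then set L0s on one side only. */
	pcie_capability_clear_and_set_word(parent, PCI_EXP_LNKCTL,
					   PCI_EXP_LNKCTL_ASPMC, parent_set);
	pcie_capability_clear_and_set_word(child, PCI_EXP_LNKCTL,
					   PCI_EXP_LNKCTL_ASPMC, child_set);
}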
Well, these are nice features, but what kind of machine did you test on?
Does it have a hierarchy with several levels? The most powerful system I have
seen has just two levels, which doesn't seem worth adding partial ASPM for.
Thank you very much for comments.

I think it is also useful for systems that don't have many levels of
hierarchy, such as mobile systems. Here is the "lspci -t" output from my
environment, with some comments. My system also has just two levels, but
with the patch, upstream L0s is enabled on the link from 0000:15:00.0
(switch downstream port) to 0000:16:00.0 (endpoint) in powersave mode.
Can you send me the output of lspci -vvvxxx please?

Here it is. Please note that I'm using the pcie_aspm=force option.
Hi,
can you please split the patch into two patches? Then I can do a more detailed
review/test. Also, please don't mix cleanups into the patches that add the new feature.
I had a quick look at the patch and found at least one issue:
A link might have multiple child links, so we need to check all the child links'
latencies before enabling the parent's ASPM. In pcie_config_aspm_path(), the patch
does not seem to check the peer links.
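Roughly what I mean, as a sketch (the helper and field names here are made up
for illustration, not taken from the patch): walk every function on the bus
below the downstream port and refuse to enable the parent's ASPM if any peer
cannot tolerate the parent link's exit latency.

#include <linux/pci.h>

/* Hypothetical helper: acceptable exit latency (ns) for one function. */
u32 sketch_acceptable_latency_ns(struct pci_dev *dev);

/*
 * Sketch: only allow ASPM on the parent link if every device on the
 * secondary bus (i.e. every peer child link) can tolerate the parent
 * link's exit latency.
 */
static bool sketch_peers_tolerate(struct pci_dev *downstream_port,
				  u32 parent_exit_latency_ns)
{
	struct pci_bus *bus = downstream_port->subordinate;
	struct pci_dev *child;

	if (!bus)
		return false;

	list_for_each_entry(child, &bus->devices, bus_list) {
		if (parent_exit_latency_ns > sketch_acceptable_latency_ns(child))
			return false;	/* one peer fails -> keep parent ASPM off */
	}
	return true;
}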

My understanding is that we can enable parent ASPM even if one
or more child links don't meet the latency requirement and ASPM
is disabled on them. In this case, the parent link will never enter
upstream-direction L0s or L1, because the parent link's
downstream component (the switch upstream port) never becomes idle.
Yes, for the case you pointed out, not enabling the parent link is always better.

Why do you think it's better?

Doing this (not enabling the parent link) would need complicated
handling and increase the cost considerably.

Let's do it this way:
1. always enable L0s on the downstream port (root port or switch downstream port)
if the downstream device supports L0s.

If we always enable L0s on the downstream port, downstream-direction
L0s will always be enabled. Do you mean we don't need to check the
latency for downstream-direction L0s?
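Just to make sure we mean the same check, this is the kind of latency test I
have in mind for one direction of L0s (a sketch only; the register masks are
from pci_regs.h, but the encoding-to-ns conversion is simplified and the
accessors are from later kernels):

#include <linux/pci.h>

/*
 * Sketch: compare the link's encoded L0s exit latency (Link
 * Capabilities) with the endpoint's acceptable L0s latency (Device
 * Capabilities).  Both fields roughly double per encoding step, so
 * convert each to nanoseconds and compare.
 */
static bool sketch_l0s_latency_ok(struct pci_dev *endpoint)
{
	u32 lnkcap, devcap, exit_ns, acceptable_ns;

	pcie_capability_read_dword(endpoint, PCI_EXP_LNKCAP, &lnkcap);
	pcie_capability_read_dword(endpoint, PCI_EXP_DEVCAP, &devcap);

	/* Top encoding of the acceptable field means "no limit". */
	if (((devcap & PCI_EXP_DEVCAP_L0S) >> 6) == 7)
		return true;

	exit_ns = 64U << ((lnkcap & PCI_EXP_LNKCAP_L0SEL) >> 12);
	acceptable_ns = 64U << ((devcap & PCI_EXP_DEVCAP_L0S) >> 6);

	return exit_ns <= acceptable_ns;
}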

2. like you said, enable the endpoint's link if its link meets the latency
requirement but its parent link doesn't.

I'm sorry but I do not understand the point of your suggestion.
What is the benefit? What do you think about switch upstream ports?

By the way, I'm very sorry for the delay in sending the separated
patches. I've already split the patch into several patches and they
compile, but they have not been tested one by one yet. If it's OK
with you, I'll send them before testing.

Thanks,
Kenji Kaneshige



--
To unsubscribe from this list: send the line "unsubscribe linux-pci" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
