Re: [PATCH net-next 6/6] ice: Document tx_scheduling_layers parameter

On 4/23/2024 2:37 PM, Bagas Sanjaya wrote:
On Mon, Apr 22, 2024 at 01:39:11PM -0700, Tony Nguyen wrote:
+       The default 9-layer tree topology was deemed best for most workloads,
+       as it gives an optimal ratio of performance to configurability. However,
+       for some specific cases, this 9-layer topology might not be desired.
+       One example would be sending traffic to queues that are not a multiple
+       of 8. Because the maximum radix is limited to 8 in 9-layer topology,
+       the 9th queue has a different parent than the rest, and it's given
+       more bandwidth credits. This causes a problem when the system is
+       sending traffic to 9 queues:
+
+       | tx_queue_0_packets: 24163396
+       | tx_queue_1_packets: 24164623
+       | tx_queue_2_packets: 24163188
+       | tx_queue_3_packets: 24163701
+       | tx_queue_4_packets: 24163683
+       | tx_queue_5_packets: 24164668
+       | tx_queue_6_packets: 23327200
+       | tx_queue_7_packets: 24163853
+       | tx_queue_8_packets: 91101417 < Too much traffic is sent from 9th
+
<snipped>...
+       To verify that value has been set:
+       $ devlink dev param show pci/0000:16:00.0 name tx_scheduling_layers

For consistency with other code blocks, format the above as follows:

---- >8 ----
diff --git a/Documentation/networking/devlink/ice.rst b/Documentation/networking/devlink/ice.rst
index 830c04354222f8..0039ca45782400 100644
--- a/Documentation/networking/devlink/ice.rst
+++ b/Documentation/networking/devlink/ice.rst
@@ -41,15 +41,17 @@ Parameters
         more bandwidth credits. This causes a problem when the system is
         sending traffic to 9 queues:
 
-       | tx_queue_0_packets: 24163396
-       | tx_queue_1_packets: 24164623
-       | tx_queue_2_packets: 24163188
-       | tx_queue_3_packets: 24163701
-       | tx_queue_4_packets: 24163683
-       | tx_queue_5_packets: 24164668
-       | tx_queue_6_packets: 23327200
-       | tx_queue_7_packets: 24163853
-       | tx_queue_8_packets: 91101417 < Too much traffic is sent from 9th
+       .. code-block:: shell
+
+         tx_queue_0_packets: 24163396
+         tx_queue_1_packets: 24164623
+         tx_queue_2_packets: 24163188
+         tx_queue_3_packets: 24163701
+         tx_queue_4_packets: 24163683
+         tx_queue_5_packets: 24164668
+         tx_queue_6_packets: 23327200
+         tx_queue_7_packets: 24163853
+         tx_queue_8_packets: 91101417 < Too much traffic is sent from 9th
 
        To address this need, you can switch to a 5-layer topology, which
         changes the maximum topology radix to 512. With this enhancement,
@@ -67,7 +69,10 @@ Parameters
         You must do PCI slot powercycle for the selected topology to take effect.
 
        To verify that value has been set:
-       $ devlink dev param show pci/0000:16:00.0 name tx_scheduling_layers
+
+       .. code-block:: shell
+
+         $ devlink dev param show pci/0000:16:00.0 name tx_scheduling_layers
 
 Info versions
 =============
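
As a side note, the end-to-end flow described in the doc would then look
roughly like this (PCI address taken from the example above; I am assuming
the parameter is set with "cmode permanent", given that a PCI slot
powercycle is required for it to take effect):

  $ devlink dev param set pci/0000:16:00.0 name tx_scheduling_layers value 5 cmode permanent
  # ... power-cycle the PCI slot so the new topology is applied ...
  $ devlink dev param show pci/0000:16:00.0 name tx_scheduling_layers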

Thanks.


Thank you for reporting that. I will verify this issue soon.
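
For anyone who wants to reproduce the imbalance from the quoted example:
those counters have the shape of standard per-queue ethtool statistics, so
on a live system something like the following should list them (the
interface name here is only a placeholder):

  $ ethtool -S ens801f0 | grep 'tx_queue_[0-8]_packets'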



