On Mon, May 08, 2023 at 12:43:26PM -0700, Emil Tantilov wrote:
> From: Joshua Hay <joshua.a.hay@xxxxxxxxx>
>
> Add PCI callback to configure SRIOV and add the necessary support
> to initialize the requested number of VFs by sending the virtchnl
> message to the device Control Plane.
>
> Add other ndo ops supported by the driver such as features_check,
> set_rx_mode, validate_addr, set_mac_address, change_mtu, get_stats64,
> set_features, and tx_timeout. Initialize the statistics task which
> requests the queue related statistics to the CP. Add loopback
> and promiscuous mode support and the respective virtchnl messages.
>
> Finally, add documentation and build support for the driver.
>
> Signed-off-by: Joshua Hay <joshua.a.hay@xxxxxxxxx>
> Co-developed-by: Alan Brady <alan.brady@xxxxxxxxx>
> Signed-off-by: Alan Brady <alan.brady@xxxxxxxxx>
> Co-developed-by: Madhu Chittim <madhu.chittim@xxxxxxxxx>
> Signed-off-by: Madhu Chittim <madhu.chittim@xxxxxxxxx>
> Co-developed-by: Phani Burra <phani.r.burra@xxxxxxxxx>
> Signed-off-by: Phani Burra <phani.r.burra@xxxxxxxxx>
> Co-developed-by: Pavan Kumar Linga <pavan.kumar.linga@xxxxxxxxx>
> Signed-off-by: Pavan Kumar Linga <pavan.kumar.linga@xxxxxxxxx>
> Reviewed-by: Sridhar Samudrala <sridhar.samudrala@xxxxxxxxx>
> Reviewed-by: Willem de Bruijn <willemb@xxxxxxxxxx>
> ---
>  .../device_drivers/ethernet/intel/idpf.rst   | 162 +++++
>  drivers/net/ethernet/intel/Kconfig           |  10 +
>  drivers/net/ethernet/intel/Makefile          |   1 +
>  drivers/net/ethernet/intel/idpf/idpf.h       |  40 ++
>  drivers/net/ethernet/intel/idpf/idpf_lib.c   | 642 +++++++++++++++++-
>  drivers/net/ethernet/intel/idpf/idpf_main.c  |  17 +
>  drivers/net/ethernet/intel/idpf/idpf_txrx.c  |  26 +
>  drivers/net/ethernet/intel/idpf/idpf_txrx.h  |   2 +
>  .../net/ethernet/intel/idpf/idpf_virtchnl.c  | 193 ++++++

You forgot to add a toctree entry for the doc:

---- >8 ----
diff --git a/Documentation/networking/device_drivers/ethernet/index.rst b/Documentation/networking/device_drivers/ethernet/index.rst
index 417ca514a4d057..5a7e377ae2b7f5 100644
--- a/Documentation/networking/device_drivers/ethernet/index.rst
+++ b/Documentation/networking/device_drivers/ethernet/index.rst
@@ -30,6 +30,7 @@ Contents:
    intel/e1000
    intel/e1000e
    intel/fm10k
+   intel/idpf
    intel/igb
    intel/igbvf
    intel/ixgbe

> +Contents
> +========
> +
> +- Overview
> +- Identifying Your Adapter
> +- Additional Features & Configurations
> +- Performance Optimization

Automatically generate the table of contents instead:

---- >8 ----
diff --git a/Documentation/networking/device_drivers/ethernet/intel/idpf.rst b/Documentation/networking/device_drivers/ethernet/intel/idpf.rst
index ae5e6430d0e636..6f7c8e15fa20df 100644
--- a/Documentation/networking/device_drivers/ethernet/intel/idpf.rst
+++ b/Documentation/networking/device_drivers/ethernet/intel/idpf.rst
@@ -7,14 +7,7 @@ idpf Linux* Base Driver for the Intel(R) Infrastructure Data Path Function
 
 Intel idpf Linux driver.
 Copyright(C) 2023 Intel Corporation.
 
-Contents
-========
-
-- Overview
-- Identifying Your Adapter
-- Additional Features & Configurations
-- Performance Optimization
-
+.. contents::
 The idpf driver serves as both the Physical Function (PF) and Virtual
 Function (VF) driver for the Intel(R) Infrastructure Data Path Function.

> +Identifying Your Adapter
> +========================
> +For information on how to identify your adapter, and for the latest Intel
> +network drivers, refer to the Intel Support website:
> +http://www.intel.com/support

Which support article(s) do you mean for identifying the adapter?
> +
> +
> +Additional Features and Configurations
> +======================================
> +
> +ethtool
> +-------
> +The driver utilizes the ethtool interface for driver configuration and
> +diagnostics, as well as displaying statistical information. The latest ethtool
> +version is required for this functionality. Download it at:
> +https://kernel.org/pub/software/network/ethtool/

"... If you don't have one yet, you can obtain it at ..."

> +
> +
> +Viewing Link Messages
> +---------------------
> +Link messages will not be displayed to the console if the distribution is
> +restricting system messages. In order to see network driver link messages on
> +your console, set dmesg to eight by entering the following:
> +
> +# dmesg -n 8
> +
> +NOTE: This setting is not saved across reboots.

How can I permanently save the above dmesg setting?
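(If the doc wants to answer that, one approach I'm aware of, assuming the
distribution applies /etc/sysctl.d/ drop-ins at boot, is to persist the console
loglevel through the kernel.printk sysctl, e.g.:

  # hypothetical drop-in, e.g. /etc/sysctl.d/90-console-loglevel.conf
  # the first field is console_loglevel; 8 matches "dmesg -n 8"
  kernel.printk = 8 4 1 7

That is only a sketch, of course; please document whatever mechanism is
actually recommended for this driver.)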
> +
> +
> +Jumbo Frames
> +------------
> +Jumbo Frames support is enabled by changing the Maximum Transmission Unit (MTU)
> +to a value larger than the default value of 1500.
> +
> +Use the ip command to increase the MTU size. For example, enter the following
> +where <ethX> is the interface number:
> +
> +# ip link set mtu 9000 dev <ethX>
> +# ip link set up dev <ethX>

For command line snippets, use literal code blocks:

---- >8 ----
diff --git a/Documentation/networking/device_drivers/ethernet/intel/idpf.rst b/Documentation/networking/device_drivers/ethernet/intel/idpf.rst
index 0a2982fb6f0045..30148d8cf34b14 100644
--- a/Documentation/networking/device_drivers/ethernet/intel/idpf.rst
+++ b/Documentation/networking/device_drivers/ethernet/intel/idpf.rst
@@ -48,9 +48,9 @@ Viewing Link Messages
 ---------------------
 Link messages will not be displayed to the console if the distribution is
 restricting system messages. In order to see network driver link messages on
-your console, set dmesg to eight by entering the following:
+your console, set dmesg to eight by entering the following::
 
-# dmesg -n 8
+  # dmesg -n 8
 
 NOTE: This setting is not saved across reboots.
 
@@ -61,10 +61,10 @@ Jumbo Frames support is enabled by changing the Maximum Transmission Unit (MTU)
 to a value larger than the default value of 1500.
 
 Use the ip command to increase the MTU size. For example, enter the following
-where <ethX> is the interface number:
+where <ethX> is the interface number::
 
-# ip link set mtu 9000 dev <ethX>
-# ip link set up dev <ethX>
+  # ip link set mtu 9000 dev <ethX>
+  # ip link set up dev <ethX>
 
 NOTE: The maximum MTU setting for jumbo frames is 9706. This corresponds to the
 maximum jumbo frame size of 9728 bytes.
@@ -92,40 +92,40 @@ is tuned for general workloads. The user can customize the interrupt rate
 control for specific workloads, via ethtool, adjusting the number of
 microseconds between interrupts.
 
-To set the interrupt rate manually, you must disable adaptive mode:
+To set the interrupt rate manually, you must disable adaptive mode::
 
-# ethtool -C <ethX> adaptive-rx off adaptive-tx off
+  # ethtool -C <ethX> adaptive-rx off adaptive-tx off
 
 For lower CPU utilization:
 
 - Disable adaptive ITR and lower Rx and Tx interrupts. The examples below
   affect every queue of the specified interface.
 
   - Setting rx-usecs and tx-usecs to 80 will limit interrupts to about
-    12,500 interrupts per second per queue:
+    12,500 interrupts per second per queue::
 
-    # ethtool -C <ethX> adaptive-rx off adaptive-tx off rx-usecs 80
-    tx-usecs 80
+      # ethtool -C <ethX> adaptive-rx off adaptive-tx off rx-usecs 80
+      tx-usecs 80
 
 For reduced latency:
 
 - Disable adaptive ITR and ITR by setting rx-usecs and tx-usecs to 0
-  using ethtool:
+  using ethtool::
 
-    # ethtool -C <ethX> adaptive-rx off adaptive-tx off rx-usecs 0
-    tx-usecs 0
+      # ethtool -C <ethX> adaptive-rx off adaptive-tx off rx-usecs 0
+      tx-usecs 0
 
 Per-queue interrupt rate settings:
 
 - The following examples are for queues 1 and 3, but you can adjust other
   queues.
 
   - To disable Rx adaptive ITR and set static Rx ITR to 10 microseconds or
-    about 100,000 interrupts/second, for queues 1 and 3:
+    about 100,000 interrupts/second, for queues 1 and 3::
 
-    # ethtool --per-queue <ethX> queue_mask 0xa --coalesce adaptive-rx off
-    rx-usecs 10
+      # ethtool --per-queue <ethX> queue_mask 0xa --coalesce adaptive-rx off
+      rx-usecs 10
 
-  - To show the current coalesce settings for queues 1 and 3:
+  - To show the current coalesce settings for queues 1 and 3::
 
-    # ethtool --per-queue <ethX> queue_mask 0xa --show-coalesce
+      # ethtool --per-queue <ethX> queue_mask 0xa --show-coalesce
@@ -139,9 +139,9 @@ helpful to optimize performance in VMs.
   device's local_cpulist: /sys/class/net/<ethX>/device/local_cpulist.
 
 - Configure as many Rx/Tx queues in the VM as available. (See the idpf driver
-  documentation for the number of queues supported.) For example:
+  documentation for the number of queues supported.) For example::
 
-  # ethtool -L <virt_interface> rx <max> tx <max>
+    # ethtool -L <virt_interface> rx <max> tx <max>
 
 
 Support

> +
> +NOTE: The maximum MTU setting for jumbo frames is 9706. This corresponds to the
> +maximum jumbo frame size of 9728 bytes.
> +
> +NOTE: This driver will attempt to use multiple page sized buffers to receive
> +each jumbo packet. This should help to avoid buffer starvation issues when
> +allocating receive packets.
> +
> +NOTE: Packet loss may have a greater impact on throughput when you use jumbo
> +frames. If you observe a drop in performance after enabling jumbo frames,
> +enabling flow control may mitigate the issue.

Sphinx has an admonition directive facility to style the above notes:
---- >8 ----
diff --git a/Documentation/networking/device_drivers/ethernet/intel/idpf.rst b/Documentation/networking/device_drivers/ethernet/intel/idpf.rst
index 30148d8cf34b14..ae5e6430d0e636 100644
--- a/Documentation/networking/device_drivers/ethernet/intel/idpf.rst
+++ b/Documentation/networking/device_drivers/ethernet/intel/idpf.rst
@@ -52,7 +52,8 @@ your console, set dmesg to eight by entering the following::
 
   # dmesg -n 8
 
-NOTE: This setting is not saved across reboots.
+.. note::
+   This setting is not saved across reboots.
 
 
 Jumbo Frames
@@ -66,16 +67,19 @@ where <ethX> is the interface number::
   # ip link set mtu 9000 dev <ethX>
   # ip link set up dev <ethX>
 
-NOTE: The maximum MTU setting for jumbo frames is 9706. This corresponds to the
-maximum jumbo frame size of 9728 bytes.
+.. note::
+   The maximum MTU setting for jumbo frames is 9706. This corresponds to the
+   maximum jumbo frame size of 9728 bytes.
 
-NOTE: This driver will attempt to use multiple page sized buffers to receive
-each jumbo packet. This should help to avoid buffer starvation issues when
-allocating receive packets.
+.. note::
+   This driver will attempt to use multiple page sized buffers to receive
+   each jumbo packet. This should help to avoid buffer starvation issues when
+   allocating receive packets.
 
-NOTE: Packet loss may have a greater impact on throughput when you use jumbo
-frames. If you observe a drop in performance after enabling jumbo frames,
-enabling flow control may mitigate the issue.
+.. note::
+   Packet loss may have a greater impact on throughput when you use jumbo
+   frames. If you observe a drop in performance after enabling jumbo frames,
+   enabling flow control may mitigate the issue.
 
 
 Performance Optimization

Thanks.

-- 
An old man doll... just what I always wanted! - Clara