Re: [PATCH RFC 0/7] add socket to netdev page frag recycling support

On Wed, Aug 18, 2021 at 5:33 AM Yunsheng Lin <linyunsheng@xxxxxxxxxx> wrote:
>
> This patchset adds the socket to netdev page frag recycling
> support based on the busy polling and page pool infrastructure.

I really do not see how this can scale to thousands of sockets.

tcp_mem[] defaults to ~9% of physical memory.

If you now run tests with thousands of sockets, their skbs will consume
gigabytes of memory on typical servers, now backed by order-0 pages
(instead of the current order-3 pages), so the IOMMU costs will actually
be much bigger.

Are we planning to use gigabyte-sized page pools for NICs?

Have you tried instead making TCP frags twice as big?
That would require fewer IOMMU mappings.
(Note: this could require some mm help, since PAGE_ALLOC_COSTLY_ORDER
is currently 3, not 4.)

diff --git a/net/core/sock.c b/net/core/sock.c
index a3eea6e0b30a7d43793f567ffa526092c03e3546..6b66b51b61be9f198f6f1c4a3d81b57fa327986a
100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -2560,7 +2560,7 @@ static void sk_leave_memory_pressure(struct sock *sk)
        }
 }

-#define SKB_FRAG_PAGE_ORDER    get_order(32768)
+#define SKB_FRAG_PAGE_ORDER    get_order(65536)
 DEFINE_STATIC_KEY_FALSE(net_high_order_alloc_disable_key);

 /**



>
> The performance improves from 30Gbit to 41Gbit for a one-thread iperf
> tcp flow, and the CPU usage decreases by about 20% for a four-thread
> iperf flow at 100Gb line speed in IOMMU strict mode.
>
> The performance improves by about 2.5% for a one-thread iperf tcp flow
> in IOMMU passthrough mode.
>
> Yunsheng Lin (7):
>   page_pool: refactor the page pool to support multi alloc context
>   skbuff: add interface to manipulate frag count for tx recycling
>   net: add NAPI api to register and retrieve the page pool ptr
>   net: pfrag_pool: add pfrag pool support based on page pool
>   sock: support refilling pfrag from pfrag_pool
>   net: hns3: support tx recycling in the hns3 driver
>   sysctl_tcp_use_pfrag_pool
>
>  drivers/net/ethernet/hisilicon/hns3/hns3_enet.c | 32 +++++----
>  include/linux/netdevice.h                       |  9 +++
>  include/linux/skbuff.h                          | 43 +++++++++++-
>  include/net/netns/ipv4.h                        |  1 +
>  include/net/page_pool.h                         | 15 ++++
>  include/net/pfrag_pool.h                        | 24 +++++++
>  include/net/sock.h                              |  1 +
>  net/core/Makefile                               |  1 +
>  net/core/dev.c                                  | 34 ++++++++-
>  net/core/page_pool.c                            | 86 ++++++++++++-----------
>  net/core/pfrag_pool.c                           | 92 +++++++++++++++++++++++++
>  net/core/sock.c                                 | 12 ++++
>  net/ipv4/sysctl_net_ipv4.c                      |  7 ++
>  net/ipv4/tcp.c                                  | 34 ++++++---
>  14 files changed, 325 insertions(+), 66 deletions(-)
>  create mode 100644 include/net/pfrag_pool.h
>  create mode 100644 net/core/pfrag_pool.c
>
> --
> 2.7.4
>


