Re: [PATCH net-next v3 3/3] page_pool: fix IOMMU crash when driver has already unbound

On 26/10/2024 09.33, Yunsheng Lin wrote:
On 2024/10/25 22:07, Jesper Dangaard Brouer wrote:

...


You and Jesper seem to be suggesting that 'hundreds of gigs of memory'
might be needed for inflight pages. It would be nice to provide more
info or reasoning about why 'hundreds of gigs of memory' is needed here,
so that we don't over-design things to support recording an unlimited
number of in-flight pages, in case stalling at driver unbind turns out
to be impossible and the in-flight pages really do need to be recorded.

I don't have a concrete example of a use case that will blow the limit
you are setting (but maybe Jesper does); I am simply objecting to the
arbitrary imposition of any limit at all. It smells a lot like "640k
ought to be enough for anyone".


As I wrote before: in *production* I'm seeing TCP memory reach 24 GiB
(on machines with 384 GiB of memory). I have attached a grafana
screenshot to prove what I'm saying.

As my co-worker Mike Freemon has explained to me (with more details in
the blog post [1]), it is no coincidence that the graph has a strange
"ceiling" close to 24 GiB (on machines with 384 GiB total memory). This
is because the TCP network stack goes into a memory "under pressure"
state when 6.25% of total memory is used by the TCP stack. (Detail: the
system stays in that mode until allocated TCP memory falls below 4.68%
of total memory.)

  [1] https://blog.cloudflare.com/unbounded-memory-usage-by-tcp-for-receive-buffers-and-how-we-fixed-it/
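
For a machine with 384 GiB of memory, those default thresholds work out
to roughly:

  0.0625 * 384 GiB  = 24 GiB   (enter "under pressure")
  0.0468 * 384 GiB ~= 18 GiB   (leave "under pressure")

which matches the ~24 GiB ceiling visible in the graph.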

Thanks for the info.

Some more info from production servers.

(I'm amazed at what we can do with a simple bpftrace script, Cc Viktor)

In the bpftrace script/one-liner below I'm extracting the inflight
count for all page_pool instances in the system and storing it in a
histogram.

sudo bpftrace -e '
 rawtracepoint:page_pool_state_release {
  @cnt[probe]=count();        // events in the current 1s interval
  @cnt_total[probe]=count();  // events since the script started
  $pool=(struct page_pool*)arg0;
  $release_cnt=(uint32)arg2;
  $hold_cnt=$pool->pages_state_hold_cnt;
  // wraparound-safe distance between the two free-running u32 counters
  $inflight_cnt=(int32)($hold_cnt - $release_cnt);
  @inflight=hist($inflight_cnt);
 }
 interval:s:1 {time("\n%H:%M:%S\n");
  print(@cnt); clear(@cnt);   // per-interval counts, reset each second
  print(@inflight);
  print(@cnt_total);
 }'
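
Side note: the (int32) cast above is a wraparound-safe distance between
two free-running u32 counters; page_pool_inflight() in
net/core/page_pool.c does essentially the same thing. A minimal
userspace sketch (my illustration, not the kernel code):

 #include <stdint.h>
 #include <stdio.h>

 /* Wraparound-safe distance between two free-running u32 counters,
  * the same trick as the (int32) cast in the bpftrace script above. */
 static int32_t inflight(uint32_t hold_cnt, uint32_t release_cnt)
 {
         return (int32_t)(hold_cnt - release_cnt);
 }

 int main(void)
 {
         /* Correct even after the counters wrap past UINT32_MAX. */
         printf("%d\n", inflight(10, 4294967290u));  /* prints 16 */
         printf("%d\n", inflight(200000, 72000));    /* prints 128000 */
         return 0;
 }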

The page_pool behavior depends on how the NIC driver uses it, so I've
run this on two production servers with the bnxt and mlx5 drivers, on a
6.6.51 kernel.

Driver: bnxt_en
 - Kernel: 6.6.51

@cnt[rawtracepoint:page_pool_state_release]: 8447
@inflight:
[0]             507 |                                        |
[1]             275 |                                        |
[2, 4)          261 |                                        |
[4, 8)          215 |                                        |
[8, 16)         259 |                                        |
[16, 32)        361 |                                        |
[32, 64)        933 |                                        |
[64, 128)      1966 |                                        |
[128, 256)   937052 |@@@@@@@@@                               |
[256, 512)  5178744 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[512, 1K)     73908 |                                        |
[1K, 2K)    1220128 |@@@@@@@@@@@@                            |
[2K, 4K)    1532724 |@@@@@@@@@@@@@@@                         |
[4K, 8K)    1849062 |@@@@@@@@@@@@@@@@@@                      |
[8K, 16K)   1466424 |@@@@@@@@@@@@@@                          |
[16K, 32K)   858585 |@@@@@@@@                                |
[32K, 64K)   693893 |@@@@@@                                  |
[64K, 128K)  170625 |@                                       |

Driver: mlx5_core
 - Kernel: 6.6.51

@cnt[rawtracepoint:page_pool_state_release]: 1975
@inflight:
[128, 256)         28293 |@@@@                               |
[256, 512)        184312 |@@@@@@@@@@@@@@@@@@@@@@@@@@@        |
[512, 1K)              0 |                                   |
[1K, 2K)            4671 |                                   |
[2K, 4K)          342571 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[4K, 8K)          180520 |@@@@@@@@@@@@@@@@@@@@@@@@@@@        |
[8K, 16K)          96483 |@@@@@@@@@@@@@@                     |
[16K, 32K)         25133 |@@@                                |
[32K, 64K)          8274 |@                                  |


The key thing to notice is that we have up to 128,000 pages in flight
on these random production servers. The NICs have 64 RX queues
configured, and thus also 64 page_pool objects.
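
As a rough back-of-the-envelope estimate (assuming order-0, 4 KiB
pages): 128,000 in-flight pages * 4 KiB = 500 MiB of packet memory on a
single machine, spread across those 64 page_pool instances.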

--Jesper



