On Mon, 5 Nov 2018 16:58:19 +0800 Aaron Lu <aaron.lu@xxxxxxxxx> wrote:

> page_frag_free() calls __free_pages_ok() to free the page back to
> Buddy. This is OK for high order pages, but for order-0 pages, it
> misses the optimization opportunity of using Per-Cpu-Pages and can
> cause zone lock contention when called frequently.
>
> Paweł Staszewski recently shared his result of 'how Linux kernel
> handles normal traffic'[1] and from the perf data, Jesper Dangaard
> Brouer found that the lock contention comes from the page allocator:
>
>   mlx5e_poll_tx_cq
>   |
>    --16.34%--napi_consume_skb
>              |
>              |--12.65%--__free_pages_ok
>              |          |
>              |           --11.86%--free_one_page
>              |                     |
>              |                     |--10.10%--queued_spin_lock_slowpath
>              |                     |
>              |                      --0.65%--_raw_spin_lock
>              |
>              |--1.55%--page_frag_free
>              |
>               --1.44%--skb_release_data
>
> Jesper explained how it happens: the mlx5 driver's RX-page recycle
> mechanism is not effective in this workload, so pages have to go
> through the page allocator. The lock contention occurs during the
> mlx5 DMA TX completion cycle, and the page allocator cannot keep up
> at these speeds.[2]
>
> I thought that __free_pages_ok() was mostly freeing high order pages
> and that this was lock contention on high order pages, but Jesper
> explained in detail that __free_pages_ok() here is actually freeing
> order-0 pages, because mlx5 uses order-0 pages to satisfy its page
> pool allocation requests.[3]
>
> The free path as pointed out by Jesper is:
> skb_free_head()
>   -> skb_free_frag()
>     -> skb_free_frag()

Nitpick: you added skb_free_frag() two times, else correct. (All this
stuff gets inlined by the compiler, which makes it hard to spot with
perf report.)

>       -> page_frag_free()
> And the pages being freed on this path are order-0 pages.
>
> Fix this by doing similar things as in __page_frag_cache_drain() -
> send the page being freed to the PCP if it's an order-0 page, or
> directly to Buddy if it is a high order page.
>
> With this change, Paweł no longer observes the lock contention in
> his workload, and Jesper measured a 7% performance improvement in a
> micro-benchmark, with the lock contention gone.
>
> [1]: https://www.spinics.net/lists/netdev/msg531362.html
> [2]: https://www.spinics.net/lists/netdev/msg531421.html
> [3]: https://www.spinics.net/lists/netdev/msg531556.html
>
> Reported-by: Paweł Staszewski <pstaszewski@xxxxxxxxx>
> Analysed-by: Jesper Dangaard Brouer <brouer@xxxxxxxxxx>
> Signed-off-by: Aaron Lu <aaron.lu@xxxxxxxxx>
> ---

It is REALLY great that Aaron spotted this! (based on my analysis).
This has likely been causing scalability issues with real-life network
traffic, but it has been hiding behind the driver-level recycle tricks
in micro-benchmarks.

Acked-by: Jesper Dangaard Brouer <brouer@xxxxxxxxxx>

>  mm/page_alloc.c | 10 ++++++++--
>  1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index ae31839874b8..91a9a6af41a2 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4555,8 +4555,14 @@ void page_frag_free(void *addr)
>  {
>  	struct page *page = virt_to_head_page(addr);
>
> -	if (unlikely(put_page_testzero(page)))
> -		__free_pages_ok(page, compound_order(page));
> +	if (unlikely(put_page_testzero(page))) {
> +		unsigned int order = compound_order(page);
> +
> +		if (order == 0)
> +			free_unref_page(page);
> +		else
> +			__free_pages_ok(page, order);
> +	}
>  }
>  EXPORT_SYMBOL(page_frag_free);
> --

Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
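
For readers who don't have the tree handy: __page_frag_cache_drain(),
which the changelog points to as the model for this fix, already makes
the same order-0 vs. high-order split. Quoted from memory from
mm/page_alloc.c around v4.19, so details may differ slightly between
kernel versions:

void __page_frag_cache_drain(struct page *page, unsigned int count)
{
	VM_BUG_ON_PAGE(page_ref_count(page) == 0, page);

	if (page_ref_sub_and_test(page, count)) {
		unsigned int order = compound_order(page);

		/* order-0 goes to the per-CPU lists, the rest to Buddy */
		if (order == 0)
			free_unref_page(page);
		else
			__free_pages_ok(page, order);
	}
}

The patch gives page_frag_free() the same fork in the road.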
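
To see why that matters for the zone lock, here is a minimal userspace
sketch of the batching idea (a toy analogue, not kernel code: BATCH,
free_page_locked() and free_page_batched() are made-up names standing
in for the PCP high watermark, the __free_pages_ok() path and the
free_unref_page() path respectively). Frees parked on a CPU-local
counter only touch the shared lock once per batch instead of once per
page. Build with something like `cc -pthread toy.c`:

/* Toy model: per-page locking vs. PCP-style batched freeing. */
#include <pthread.h>
#include <stdio.h>

#define BATCH 32	/* stand-in for the PCP high watermark */

static pthread_mutex_t zone_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long buddy_pages;	/* pages returned to the shared pool */
static unsigned long lock_acquisitions;

/* per-thread cache, loosely analogous to struct per_cpu_pages */
static __thread unsigned long pcp_count;

/* __free_pages_ok()-style: one lock round-trip per page */
static void free_page_locked(void)
{
	pthread_mutex_lock(&zone_lock);
	buddy_pages++;
	lock_acquisitions++;
	pthread_mutex_unlock(&zone_lock);
}

/* free_unref_page()-style: lock taken once per BATCH pages */
static void free_page_batched(void)
{
	if (++pcp_count < BATCH)
		return;			/* stays CPU-local, no lock */
	pthread_mutex_lock(&zone_lock);
	buddy_pages += pcp_count;	/* drain the whole batch at once */
	lock_acquisitions++;
	pthread_mutex_unlock(&zone_lock);
	pcp_count = 0;
}

int main(void)
{
	int i;

	for (i = 0; i < 100000; i++)
		free_page_locked();
	printf("per-page locking:   %lu locks for %lu pages\n",
	       lock_acquisitions, buddy_pages);

	buddy_pages = lock_acquisitions = 0;
	for (i = 0; i < 100000; i++)
		free_page_batched();
	printf("PCP-style batching: %lu locks for %lu pages\n",
	       lock_acquisitions, buddy_pages);
	return 0;
}

With several threads hammering free_page_locked(), the shared lock
becomes the bottleneck, mirroring the queued_spin_lock_slowpath entry
in the perf output above; the batched variant takes that lock roughly
BATCH times less often.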