Re: [PATCH net-next RFC 0/2] add elevated refcnt support for page pool

On Fri, Jul 09, 2021 at 02:40:02PM +0800, Yunsheng Lin wrote:
> On 2021/7/9 12:15, Matteo Croce wrote:
> > On Wed, Jul 7, 2021 at 6:50 PM Marcin Wojtas <mw@xxxxxxxxxxxx> wrote:
> >>
> >> Hi,
> >>
> >>
> >> On Wed, 7 Jul 2021 at 01:20, Matteo Croce <mcroce@xxxxxxxxxxxxxxxxxxx> wrote:
> >>>
> >>> On Tue, Jul 6, 2021 at 5:51 PM Russell King (Oracle)
> >>> <linux@xxxxxxxxxxxxxxx> wrote:
> >>>>
> >>>> On Fri, Jul 02, 2021 at 03:39:47PM +0200, Matteo Croce wrote:
> >>>>> On Wed, 30 Jun 2021 17:17:54 +0800
> >>>>> Yunsheng Lin <linyunsheng@xxxxxxxxxx> wrote:
> >>>>>
> >>>>>> This patchset adds elevated refcnt support for page pool
> >>>>>> and enables skb's page frag recycling based on page pool
> >>>>>> in the hns3 driver.
> >>>>>>
> >>>>>> Yunsheng Lin (2):
> >>>>>>   page_pool: add page recycling support based on elevated refcnt
> >>>>>>   net: hns3: support skb's frag page recycling based on page pool
> >>>>>>
> >>>>>>  drivers/net/ethernet/hisilicon/hns3/hns3_enet.c    |  79 +++++++-
> >>>>>>  drivers/net/ethernet/hisilicon/hns3/hns3_enet.h    |   3 +
> >>>>>>  drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c |   1 +
> >>>>>>  drivers/net/ethernet/marvell/mvneta.c              |   6 +-
> >>>>>>  drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c    |   2 +-
> >>>>>>  include/linux/mm_types.h                           |   2 +-
> >>>>>>  include/linux/skbuff.h                             |   4 +-
> >>>>>>  include/net/page_pool.h                            |  30 ++-
> >>>>>>  net/core/page_pool.c                               | 215 +++++++++++++++++----
> >>>>>>  9 files changed, 285 insertions(+), 57 deletions(-)
> >>>>>>
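
For context, the driver-side page pool API this series builds on looks
roughly like the sketch below (the pre-existing non-frag API, as I
understand it; the pool_size value and the dev pointer are placeholders).
The series layers a refcount bias on top so several skb frags can share
a single pool page:

	/* minimal sketch of basic page_pool usage in an rx path */
	#include <net/page_pool.h>

	struct page_pool_params pp_params = {
		.flags		= PP_FLAG_DMA_MAP, /* pool handles DMA mapping */
		.order		= 0,		   /* single pages */
		.pool_size	= 256,		   /* ptr_ring size */
		.nid		= NUMA_NO_NODE,
		.dev		= dev,		   /* device used for DMA mapping */
		.dma_dir	= DMA_FROM_DEVICE,
	};
	struct page_pool *pool = page_pool_create(&pp_params);

	/* rx refill: returns a fresh or recycled page */
	struct page *page = page_pool_dev_alloc_pages(pool);

	/* rx completion in NAPI context: recycle instead of freeing;
	 * allow_direct=true permits the lockless per-CPU cache
	 */
	page_pool_put_full_page(pool, page, true);
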
> >>>>>
> >>>>> Interesting!
> >>>>> Unfortunately I won't have access to my macchiatobin anytime soon. Can
> >>>>> someone test the impact, if any, on mvpp2?
> >>>>
> >>>> I'll try to test. Please let me know what kind of testing you're
> >>>> looking for (I haven't been following these patches, sorry.)
> >>>>
> >>>
> >>> A drop test or an L2 routing test will be enough.
> >>> BTW I should have the macchiatobin back on Friday.
> >>
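
(As an example, one common way to run such a drop test is to drop at the
earliest netstack hook on the receiving port and read the rate from the
interface counters, e.g.:

	iptables -t raw -I PREROUTING -i eth0 -j DROP

with eth0 standing in for the port under test.)
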
> >> I have a 10G packet generator connected to the 10G ports of a CN913x-DB;
> >> I will stress mvpp2 in L2 forwarding early next week (I'm mostly AFK
> >> until Monday).
> >>
> > 
> > I managed to do a drop test on mvpp2. There may be a slowdown, but
> > it's below the measurement uncertainty.
> > 
> > Perf top before:
> > 
> > Overhead  Shared O  Symbol
> >    8.48%  [kernel]  [k] page_pool_put_page
> >    2.57%  [kernel]  [k] page_pool_refill_alloc_cache
> >    1.58%  [kernel]  [k] page_pool_alloc_pages
> >    0.75%  [kernel]  [k] page_pool_return_skb_page
> > 
> > after:
> > 
> > Overhead  Shared O  Symbol
> >    8.34%  [kernel]  [k] page_pool_put_page
> >    4.52%  [kernel]  [k] page_pool_return_skb_page
> >    4.42%  [kernel]  [k] page_pool_sub_bias
> >    3.16%  [kernel]  [k] page_pool_alloc_pages
> >    2.43%  [kernel]  [k] page_pool_refill_alloc_cache
> 
> Hi Matteo,
> Thanks for the testing.
> It seems you have adapted the mvpp2 driver to use the new frag
> API for page pool. There is one missing optimization for the XDP
> case: in the elevated refcnt case, the page is always returned to
> pool->ring regardless of the context of page_pool_put_page().
> 
> Adding back that optimization may close some of the above
> performance gap if the drop is happening in softirq context.
> 
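
For reference, I believe the optimization in question is the
direct-recycle fast path in __page_pool_put_page() in the current
net/core/page_pool.c, which the elevated-refcnt path bypasses. Roughly
(a sketch, not the exact tree state):

	/* In NAPI/softirq context, and with the caller's permission, the
	 * page goes into the lockless per-CPU alloc cache instead of the
	 * spinlock-protected ptr_ring.
	 */
	if (allow_direct && in_serving_softirq() &&
	    page_pool_recycle_in_cache(page, pool))
		return NULL;	/* recycled; nothing for the caller to free */

	/* otherwise the page falls back to pool->ring */

Re-adding that branch for the elevated-refcnt case would avoid taking
the ring lock on every recycle when the drop happens in softirq context.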

I think what Matteo did was a pure netstack test.  We'll need testing on
both XDP and normal network cases to be able to figure out the exact
impact.
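
For the XDP side, a drop test can be as simple as attaching a minimal
program that returns XDP_DROP on the receiving port. A sketch (eth0
stands in for the mvpp2 interface under test):

	/* xdp_drop.c: drop every frame at the earliest rx point */
	#include <linux/bpf.h>
	#include <bpf/bpf_helpers.h>

	SEC("xdp")
	int xdp_drop(struct xdp_md *ctx)
	{
		return XDP_DROP;
	}

	char _license[] SEC("license") = "GPL";

Built with "clang -O2 -g -target bpf -c xdp_drop.c -o xdp_drop.o" and
attached with "ip link set dev eth0 xdpdrv obj xdp_drop.o sec xdp", that
exercises the XDP recycle path; the normal network case is what Matteo's
netstack drop test already covers.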

Thanks
/Ilias


