Re: [RFC PATCH net-next v6 00/15] Device Memory TCP

On Tue, Mar 26, 2024 at 5:47 AM Yunsheng Lin <linyunsheng@xxxxxxxxxx> wrote:
>
> On 2024/3/26 8:28, Mina Almasry wrote:
> > On Tue, Mar 5, 2024 at 11:38 AM Mina Almasry <almasrymina@xxxxxxxxxx> wrote:
> >>
> >> On Tue, Mar 5, 2024 at 4:54 AM Yunsheng Lin <linyunsheng@xxxxxxxxxx> wrote:
> >>>
> >>> On 2024/3/5 10:01, Mina Almasry wrote:
> >>>
> >>> ...
> >>>
> >>>>
> >>>> Perf - page-pool benchmark:
> >>>> ---------------------------
> >>>>
> >>>> bench_page_pool_simple.ko tests with and without these changes:
> >>>> https://pastebin.com/raw/ncHDwAbn
> >>>>
> >>>> AFAIK the number that really matters in the perf tests is
> >>>> 'tasklet_page_pool01_fast_path Per elem'. This one measures at about
> >>>> 8 cycles without the changes, though there is some 1 cycle noise in
> >>>> some results.
> >>>>
> >>>> With the patches this regresses to 9 cycles, with the same occasional
> >>>> 1 cycle noise when running the test repeatedly.
> >>>>
> >>>> Lastly, I tried disabling the static_branch_unlikely() in the
> >>>> netmem_is_net_iov() check. To my surprise, disabling the
> >>>> static_branch_unlikely() check brings the fast path back to 8 cycles,
> >>>> but the 1 cycle noise remains.
> >>>>
> >>>
> >>> The last sentence seems to suggest that the above 1 ns regression is
> >>> caused by the static_branch_unlikely() checking?
> >>
> >> Note it's not a 1 ns regression; it looks like maybe a 1 cycle
> >> regression (slightly less than 1 ns, if I'm reading the output of the
> >> test correctly):
> >>
> >> # clean net-next
> >> time_bench: Type:tasklet_page_pool01_fast_path Per elem: 8 cycles(tsc)
> >> 2.993 ns (step:0)
> >>
> >> # with patches
> >> time_bench: Type:tasklet_page_pool01_fast_path Per elem: 9 cycles(tsc)
> >> 3.679 ns (step:0)
> >>
> >> # with patches and with diff that disables static branching:
> >> time_bench: Type:tasklet_page_pool01_fast_path Per elem: 8 cycles(tsc)
> >> 3.248 ns (step:0)
> >>
> >> I do see noise in the test results from run to run, and any regression
> >> is partly obscured by that noise, so it's a bit hard to make confident
> >> statements. So far it looks like a ~0.25 ns regression without the
> >> static branch and about a ~0.65 ns regression with the static branch.
> >>
> >> Honestly, when I saw all 3 results were within the noise I did not
> >> investigate further, but if this looks concerning to you I can dig
> >> deeper. I would need to gather a few more test runs to filter out the
> >> noise, and perhaps inspect the assembly the compiler generates to
> >> narrow down what changes.
> >>
> >
> > I did some more investigation here to gather more data to filter out
> > the noise, and recorded the summary here:
> >
> > https://pastebin.com/raw/v5dYRg8L
> >
> > Long story short, the page_pool benchmark results are consistent, aside
> > from some outlier noisy runs that I'm discounting here. Currently the
> > page_pool fast path is at 8 cycles
> >
> > [ 2115.724510] time_bench: Type:tasklet_page_pool01_fast_path Per
> > elem: 8 cycles(tsc) 3.187 ns (step:0) - (measurement period
> > time:0.031870585 sec time_interval:31870585) - (invoke count:10000000
> > tsc_interval:86043192)
> >
> > and with this patch series it degrades to 10 cycles, i.e. a degradation
> > of about 0.7 ns:
>
> Even if the absolute value of the overhead is small, we seem to have a
> degradation of about 20% for the tasklet_page_pool01_fast_path testcase,
> which seems scary.
>
> I am assuming that every page is recyclable in the
> tasklet_page_pool01_fast_path testcase. Since that code path matters for
> page_pool, it would be good to remove any additional checking from it.
>

We can remove the usage of static_branch_unlikely in the net_iov
check, which reduces the overhead to 1 cycle (8->9), i.e. only 12.5%
overhead. The addition of the static_branch_unlikely is not improving
devmem TCP performance anyway. In previous discussions Jesper deemed a
1 cycle degradation acceptable, but he hasn't commented in a while; he
may have changed his mind, but so far there have been no complaints.
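
For concreteness, here is a rough sketch of the check in question,
simplified from the series (netmem_ref is really a __bitwise type with
__force casts, and the _branched/_plain names are mine, just to show
the two variants side by side):

#include <linux/types.h>
#include <linux/jump_label.h>

/* Static key the series flips when a memory provider is bound. */
DECLARE_STATIC_KEY_FALSE(page_pool_mem_providers);

/* Low bit of a netmem_ref marks a net_iov rather than a struct page. */
#define NET_IOV 0x01UL

typedef unsigned long netmem_ref;

/* Variant currently in the series, guarded by the static branch: */
static inline bool netmem_is_net_iov_branched(netmem_ref netmem)
{
	return static_branch_unlikely(&page_pool_mem_providers) &&
	       (netmem & NET_IOV);
}

/* Variant without the static branch: a single and+test, which is
 * where the 9->8 cycle recovery comes from: */
static inline bool netmem_is_net_iov_plain(netmem_ref netmem)
{
	return netmem & NET_IOV;
}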

We can additionally add the check only if CONFIG_DMA_SHARED_BUFFER is
enabled. I've tested that, and the fast path goes back to 8 cycles (0
overhead). If CONFIG_DMA_SHARED_BUFFER is not enabled then netmem can't
be dmabuf anyway, so there is no reason to check.
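
A rough sketch of that gating (the exact #ifdef placement here is mine,
not the diff I tested):

/* Without dma-buf support no net_iov-backed netmem can be created, so
 * the NET_IOV bit can never be set and the test can be compiled out. */
static inline bool netmem_is_net_iov(netmem_ref netmem)
{
#ifdef CONFIG_DMA_SHARED_BUFFER
	return netmem & NET_IOV;
#else
	return false;	/* constant-folds away; fast path stays at 8 cycles */
#endif
}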

> And we already have the pool->has_init_callback check when we have to
> use a new page; it may make sense to refactor that so it shares the
> same check with the provider, to avoid the overhead as much as
> possible.
>
> Also, I am not sure it really matters that much: with
> netmem_is_net_iov() checks spreading through the networking code, the
> overhead might add up for other cases too.
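
For illustration, folding both conditions into one flag computed at
pool creation could look roughly like the below. This is only a
sketch; the struct layout and the memory_provider_ops name are
approximations, not the actual page_pool code:

/* Compute a single "fresh page needs extra setup" flag once at pool
 * creation, so the per-page path tests one condition covering both
 * the init callback and a bound memory provider. */
struct page_pool_sketch {
	bool has_slow_setup;	/* init_callback || provider bound */
	void (*init_callback)(netmem_ref netmem, void *arg);
	void *init_arg;
	const struct memory_provider_ops *mp_ops;
};

static void sketch_set_pp_info(struct page_pool_sketch *pool,
			       netmem_ref netmem)
{
	if (likely(!pool->has_slow_setup))
		return;
	if (pool->init_callback)
		pool->init_callback(netmem, pool->init_arg);
	/* provider-specific per-page setup would also hang off this
	 * flag */
}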


-- 
Thanks,
Mina




