Re: [PATCH V4 00/10] mm: page_alloc: freelist migratetype hygiene

On Mon, May 13, 2024 at 1:04 PM Johannes Weiner <hannes@xxxxxxxxxxx> wrote:
>
> On Mon, May 13, 2024 at 12:10:04PM -0600, Yu Zhao wrote:
> > On Mon, May 13, 2024 at 10:03 AM Johannes Weiner <hannes@xxxxxxxxxxx> wrote:
> > >
> > > On Fri, May 10, 2024 at 11:14:43PM -0600, Yu Zhao wrote:
> > > > On Wed, Mar 20, 2024 at 12:04 PM Johannes Weiner <hannes@xxxxxxxxxxx> wrote:
> > > > >
> > > > > V4:
> > > > > - fixed !pcp_order_allowed() case in free_unref_folios()
> > > > > - reworded the patch 0 changelog a bit for the git log
> > > > > - rebased to mm-everything-2024-03-19-23-01
> > > > > - runtime-tested again with various CONFIG_DEBUG_FOOs enabled
> > > > >
> > > > > ---
> > > > >
> > > > > The page allocator's mobility grouping is intended to keep unmovable
> > > > > pages separate from reclaimable/compactable ones to allow on-demand
> > > > > defragmentation for higher-order allocations and huge pages.
> > > > >
> > > > > Currently, there are several places where accidental type mixing
> > > > > occurs: an allocation asks for a page of a certain migratetype and
> > > > > receives another. This ruins pageblocks for compaction, which in turn
> > > > > makes allocating huge pages more expensive and less reliable.
> > > > >
> > > > > The series addresses those causes. The last patch adds type checks on
> > > > > all freelist movements to prevent new violations being introduced.
> > > > >
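> > > > > As a rough illustration (a simplified sketch; the real patch also
> > > > > covers the move/delete variants and tail insertions), a freelist
> > > > > insertion with the type check looks like this:
> > > > >
> > > > >     static inline void add_to_free_list(struct page *page,
> > > > >                                         struct zone *zone,
> > > > >                                         unsigned int order,
> > > > >                                         int migratetype)
> > > > >     {
> > > > >             struct free_area *area = &zone->free_area[order];
> > > > >
> > > > >             /* Catch stray type mixing at the source */
> > > > >             VM_WARN_ONCE(get_pageblock_migratetype(page) != migratetype,
> > > > >                          "page type is %lu, passed migratetype is %d (nr=%d)\n",
> > > > >                          get_pageblock_migratetype(page), migratetype,
> > > > >                          1 << order);
> > > > >
> > > > >             list_add(&page->buddy_list, &area->free_list[migratetype]);
> > > > >             area->nr_free++;
> > > > >     }
> > > > >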
> > > > > The benefits can be seen in a mixed workload that stresses the machine
> > > > > with a memcache-type workload and a kernel build job while
> > > > > periodically attempting to allocate batches of THP. The following data
> > > > > is aggregated over 50 consecutive defconfig builds:
> > > > >
> > > > >                                                         VANILLA                 PATCHED
> > > > > Hugealloc Time mean                      165843.93 (    +0.00%)  113025.88 (   -31.85%)
> > > > > Hugealloc Time stddev                    158957.35 (    +0.00%)  114716.07 (   -27.83%)
> > > > > Kbuild Real time                            310.24 (    +0.00%)     300.73 (    -3.06%)
> > > > > Kbuild User time                           1271.13 (    +0.00%)    1259.42 (    -0.92%)
> > > > > Kbuild System time                          582.02 (    +0.00%)     559.79 (    -3.81%)
> > > > > THP fault alloc                           30585.14 (    +0.00%)   40853.62 (   +33.57%)
> > > > > THP fault fallback                        36626.46 (    +0.00%)   26357.62 (   -28.04%)
> > > > > THP fault fail rate %                        54.49 (    +0.00%)      39.22 (   -27.53%)
> > > > > Pagealloc fallback                         1328.00 (    +0.00%)       1.00 (   -99.85%)
> > > > > Pagealloc type mismatch                  181009.50 (    +0.00%)       0.00 (  -100.00%)
> > > > > Direct compact stall                        434.56 (    +0.00%)     257.66 (   -40.61%)
> > > > > Direct compact fail                         421.70 (    +0.00%)     249.94 (   -40.63%)
> > > > > Direct compact success                       12.86 (    +0.00%)       7.72 (   -37.09%)
> > > > > Direct compact success rate %                 2.86 (    +0.00%)       2.82 (    -0.96%)
> > > > > Compact daemon scanned migrate          3370059.62 (    +0.00%) 3612054.76 (    +7.18%)
> > > > > Compact daemon scanned free             7718439.20 (    +0.00%) 5386385.02 (   -30.21%)
> > > > > Compact direct scanned migrate           309248.62 (    +0.00%)  176721.04 (   -42.85%)
> > > > > Compact direct scanned free              433582.84 (    +0.00%)  315727.66 (   -27.18%)
> > > > > Compact migrate scanned daemon %             91.20 (    +0.00%)      94.48 (    +3.56%)
> > > > > Compact free scanned daemon %                94.58 (    +0.00%)      94.42 (    -0.16%)
> > > > > Compact total migrate scanned           3679308.24 (    +0.00%) 3788775.80 (    +2.98%)
> > > > > Compact total free scanned              8152022.04 (    +0.00%) 5702112.68 (   -30.05%)
> > > > > Alloc stall                                 872.04 (    +0.00%)    5156.12 (  +490.71%)
> > > > > Pages kswapd scanned                     510645.86 (    +0.00%)    3394.94 (   -99.33%)
> > > > > Pages kswapd reclaimed                   134811.62 (    +0.00%)    2701.26 (   -98.00%)
> > > > > Pages direct scanned                      99546.06 (    +0.00%)  376407.52 (  +278.12%)
> > > > > Pages direct reclaimed                    62123.40 (    +0.00%)  289535.70 (  +366.06%)
> > > > > Pages total scanned                      610191.92 (    +0.00%)  379802.46 (   -37.76%)
> > > > > Pages scanned kswapd %                       76.36 (    +0.00%)       0.10 (   -98.58%)
> > > > > Swap out                                  12057.54 (    +0.00%)   15022.98 (   +24.59%)
> > > > > Swap in                                     209.16 (    +0.00%)     256.48 (   +22.52%)
> > > > > File refaults                             17701.64 (    +0.00%)   11765.40 (   -33.53%)
>
> [...]
>
> > > >
> > > > This series significantly regresses Android and ChromeOS under memory
> > > > pressure. THPs are virtually nonexistent on client devices, and IIRC,
> > > > it was mentioned in the early discussions that potential regressions
> > > > for such a case are somewhat expected?
> > >
> > > This is not expected for the 10 patches here. You might be referring
> > > to the discussion around the huge page allocator series, which had
> > > fallback restrictions and many changes to reclaim and compaction.
> >
> > Right, now I remember.
> >
> > > Can you confirm that you were testing the latest patches that are in
> > > mm-stable as of today? There was a series of follow-up fixes.
> >
> > Here is what I have on top of 6.8.y, which I think includes all the
> > follow-up fixes. The performance delta was measured between patches 5
> > and 22 below.
> >
> >      1 mm: convert free_unref_page_list() to use folios
> >      2 mm: add free_unref_folios()
> >      3 mm: handle large folios in free_unref_folios()
> >      4 mm/page_alloc: remove unused fpi_flags in free_pages_prepare()
> >      5 mm: add alloc_contig_migrate_range allocation statistics
> >      6 mm: page_alloc: remove pcppage migratetype caching
> >      7 mm: page_alloc: optimize free_unref_folios()
> >      8 mm: page_alloc: fix up block types when merging compatible blocks
> >      9 mm: page_alloc: move free pages when converting block during isolation
> >     10 mm: page_alloc: fix move_freepages_block() range error
> >     11 mm: page_alloc: fix freelist movement during block conversion
> >     12 mm-page_alloc-fix-freelist-movement-during-block-conversion-fix
> >     13 mm: page_alloc: close migratetype race between freeing and stealing
> >     14 mm: page_alloc: set migratetype inside move_freepages()
> >     15 mm: page_isolation: prepare for hygienic freelists
> >     16 mm-page_isolation-prepare-for-hygienic-freelists-fix
> >     17 mm: page_alloc: consolidate free page accounting
> >     18 mm: page_alloc: consolidate free page accounting fix
> >     19 mm: page_alloc: consolidate free page accounting fix 2
> >     20 mm: page_alloc: consolidate free page accounting fix 3
> >     21 mm: page_alloc: change move_freepages() to __move_freepages_block()
> >     22 mm: page_alloc: batch vmstat updates in expand()
>
> It does look complete to me. Did you encounter any conflicts during
> the backport? Is there any chance you can fold the fixes into their
> respective main patches and bisect the sequence?
>
> Again, it's not expected behavior given the fairly conservative
> changes above. It sounds like a bug.
>
> > > In particular, please double-check that you have the follow-up fixes
> > > to compaction capturing and the CMA fallback policy. It sounds like
> > > the behavior Baolin described before the CMA fix.
> >
> > Yes, that one was included.
>
> Ok.
>
> > > Lastly, what's the base you backported this series to?
> >
> > It was 6.8; we can potentially try 6.9 this week and 6.10-rc in a few
> > weeks, when it's in good shape for performance benchmarks.
>
> If you could try 6.9 as well, that would be great. I backported the
> series to 6.9 the other day (git cherry-picks from mm-stable) and I
> didn't encounter any conflicts.
>
> > > > On Android (ARMv8.2), app launch time regressed by about 7%; on
> > > > ChromeOS (Intel ADL), tab switch time regressed by about 8%. PSI
> > > > (both full and some) on both platforms also increased by over 20%.
> > > > I could post the details of the benchmarks and the metrics they
> > > > measure, but I doubt they would mean much to you. I did ask our
> > > > test teams to save extra kernel logs that might be more helpful,
> > > > and I could forward them to you.
> > >
> > > If the issue persists with the latest patches in -mm, a kernel config
> > > and snapshots of /proc/vmstat, /proc/pagetypeinfo, /proc/zoneinfo
> > > before/during/after the problematic behavior would be very helpful.
> >
> > Assuming all the fixes were included, do you want the logs from 6.8?
> > We have them available now.
>
> Yes, that would be helpful.
>
> If you have them, it would also be quite useful to have the vmstat
> before-after-test delta from a good kernel, for baseline comparison.

Sorry for taking so long -- I wanted to see whether the regression is
still reproducible on v6.9.

Apparently we got similar results on v6.9 with the following patches
cherry-picked cleanly from v6.10-rc1:

     1  mm: page_alloc: remove pcppage migratetype caching
     2  mm: page_alloc: optimize free_unref_folios()
     3  mm: page_alloc: fix up block types when merging compatible blocks
     4  mm: page_alloc: move free pages when converting block during isolation
     5  mm: page_alloc: fix move_freepages_block() range error
     6  mm: page_alloc: fix freelist movement during block conversion
     7  mm: page_alloc: close migratetype race between freeing and stealing
     8  mm: page_alloc: set migratetype inside move_freepages()
     9  mm: page_isolation: prepare for hygienic freelists
    10  mm: page_alloc: consolidate free page accounting
    11  mm: page_alloc: change move_freepages() to __move_freepages_block()
    12  mm: page_alloc: batch vmstat updates in expand()

Unfortunately I just realized that the automated benchmark doesn't
collect the kernel stats before it starts (since it always starts on a
freshly booted device). While this is being fixed, I'm attaching the
kernel stats collected after the benchmark finished. I grabbed 10 runs
for each kernel (baseline/patched); if you need more, please let me
know. (And we should have the pre-benchmark stats soon.)
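For the before/after vmstat delta you asked about: once the
pre-benchmark collection is in place, the two snapshots could be
diffed with something like the helper below. This is a hypothetical
sketch, not our actual tooling. Most /proc/vmstat entries are
cumulative event counters, so the delta is the activity during the
test window; the nr_* gauges just show net change instead.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_KEYS 1024

struct kv {
        char key[64];
        long long val;
};

/* Parse one saved copy of /proc/vmstat ("name value" per line). */
static int read_stats(const char *path, struct kv *kv, int max)
{
        FILE *f = fopen(path, "r");
        int n = 0;

        if (!f) {
                perror(path);
                exit(1);
        }
        while (n < max && fscanf(f, "%63s %lld", kv[n].key, &kv[n].val) == 2)
                n++;
        fclose(f);
        return n;
}

int main(int argc, char **argv)
{
        static struct kv before[MAX_KEYS], after[MAX_KEYS];
        int nb, na, i, j;

        if (argc != 3) {
                fprintf(stderr, "usage: %s <vmstat-before> <vmstat-after>\n",
                        argv[0]);
                return 1;
        }
        nb = read_stats(argv[1], before, MAX_KEYS);
        na = read_stats(argv[2], after, MAX_KEYS);

        /* Print per-counter after - before deltas. */
        for (i = 0; i < na; i++) {
                for (j = 0; j < nb; j++) {
                        if (strcmp(after[i].key, before[j].key))
                                continue;
                        printf("%-32s %lld\n", after[i].key,
                               after[i].val - before[j].val);
                        break;
                }
        }
        return 0;
}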

Attachment: log.tar.xz

