Re: [PATCH v2 00/35] Speculative page faults

On Fri, Jan 28, 2022 at 05:09:31AM -0800, Michel Lespinasse wrote:
> This patchset is my take on speculative page faults (spf).
> It builds on ideas that have been previously proposed by Laurent Dufour,
> Peter Zijlstra and others before. While Laurent's previous proposal
> was rejected around the time of LSF/MM 2019, I am hoping we can revisit
> this now based on what I think is a simpler and more bisectable approach,
> much improved scaling numbers in the anonymous vma case, and the Android
> use case that has since emerged. I will expand on these points towards
> the end of this message.
> 
> The patch series applies on top of linux v5.17-rc1;
> a git tree is also available:
> git fetch https://github.com/lespinasse/linux.git v5.17-rc1-spf-anon
> 
> I would like these patches to be considered for inclusion into v5.18.
> Several Android vendors are using Laurent Dufour's previous SPF work
> in their kernel trees in order to improve application startup
> performance, want to converge on an upstream-accepted solution, and
> have reported good numbers with previous versions of this patchset.
> Also, there is broader interest in reducing mmap lock dependencies in
> critical MM paths, and I think this patchset would be a good first
> step in that direction.
> 

I think there is a serious lack of performance data here. The only
performance point offered is the Android application startup case.
Unfortunately, that benefit may be specific to the Zygote process,
which preloads classes that may be required and listens for new
applications to start. I suspect the benefit wouldn't apply to most
Linux distributions, and even JVM-based workloads are not primarily
constrained by startup cost. Improving application startup cost is not
great justification for this level of code complexity, even though I
recognise why it is a key performance indicator for Android given that
startup times affect the user experience.

Laurent's original work was partially motivated by the performance of
a proprietary application. While I cannot replicate a full production
workload, as that can only be done by the company, I could do the kind
of basic evaluation commonly conducted on standalone systems. It was
extremely fault intensive, with SPF success rates greater than 96%,
but showed almost no change in actual performance. It's perfectly
possible that the application has changed since SPF was first
proposed. The developers did spend a fair amount of effort on making
the application NUMA-aware and on reusing memory more aggressively to
avoid faults. It's still very fault intensive but, judging from the
data, it does not appear to suffer from parallel memory operations.

In my own tests, the only preliminary tests that were clear winners
were the page-fault workloads from will-it-scale and page-fault-test
(pft), both run with threads. To be fair, the increases there are
dramatic, with a high success rate of speculative faults.

pft timings
                                 5.17.0-rc3             5.17.0-rc3
                                    vanilla        mm-spfault-v2r1
Amean     elapsed-1        32.66 (   0.00%)       32.77 *  -0.36%*
Amean     elapsed-4         9.17 (   0.00%)        8.89 *   3.07%*
Amean     elapsed-7         5.53 (   0.00%)        5.26 *   4.95%*
Amean     elapsed-12        4.13 (   0.00%)        3.50 *  15.16%*
Amean     elapsed-21        3.93 (   0.00%)        2.79 *  29.03%*
Amean     elapsed-30        4.02 (   0.00%)        2.94 *  26.79%*
Amean     elapsed-48        4.37 (   0.00%)        2.83 *  35.24%*
Amean     elapsed-79        4.13 (   0.00%)        2.17 *  47.36%*
Amean     elapsed-80        4.12 (   0.00%)        2.13 *  48.22%*

Ops SPFault Attempt                        0.00  4734439786.00
Ops SPFault Abort                          0.00     9360014.00
Ops SPFault Success                        0.00          99.80
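
Reading those counters, the Success line appears to be a percentage
rather than a raw count: (4734439786 - 9360014) / 4734439786 is
roughly 99.80%, i.e. only about 0.2% of speculative fault attempts
aborted and fell back to taking mmap_lock.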

This is the ideal case for SPF but not very realistic. Interestingly,
ebizzy barely benefited even though it is threaded, because it is not
guaranteed to be address-space-modification intensive.

Hackbench took a performance hit of 0-5% depending on the exact
configuration and machine used. It is threaded and had high SPF abort
rates (up to 50%). It's not a great example, but it shows at least one
case where SPF hurts more than it helps, and there may be other
applications that are harmed by having to retry faults.

The scope of SPF is narrow relative to the much older discussion of
breaking up mmap_sem. The only time SPF benefits is when faults race
against parallel address space updates that hold mmap_sem for write.
That requires a threaded application that is intense in terms of both
address space updates and faults. That is much narrower than the set
of threaded applications that are address space update intensive (e.g.
using mprotect to avoid accidentally leaking data, mapping data files
for IO etc.). Do we have examples of realistic applications that meet
all the criteria of "threaded", "address-space intensive" and "fault
intensive" that are common enough to justify the complexity?

Admittedly, I initially just threw this series at a collection of
workloads that simply stress the allocator, because they stress faults
as a side-effect, but most of them did not match the criteria for a
"threaded application that is both address space update intensive and
fault intensive". I'm struggling to think of good examples, although
redis is a possibility. HPC workloads like NPB parallelised with
OpenMP are a possibility, but I looked at some old results and, while
they do trap faults, the vast majority are related to NUMA balancing.
The other workloads I normally consider for scaling purposes are
process-oriented, not threaded.

On the patches themselves, I'm not sure the optimisation that skips
SPF is guaranteed to work, as mm_users could be temporarily elevated,
although probably not enough to matter (see the sketch below). I also
think patch 5 stands on its own and could be sent separately. For the
others, I didn't read them in sufficient depth, but I noted that the
amount of similar logic between the speculative and non-speculative
paths could be a maintenance headache when it comes to keeping the
speculative and !speculative rules in sync. I didn't see obvious
problems as such, but I still think the complexity is high for a
corner case.
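
For reference on the mm_users point, what I have in mind is something
like the following, assuming the optimisation boils down to a
single-user check. This is a rough sketch of the concern, not the
actual patch code.

/*
 * Hypothetical sketch, not the patch's code: only attempt the
 * speculative path when the mm looks like it is shared by threads.
 */
static inline bool spf_worth_trying(struct mm_struct *mm)
{
        /*
         * mm_users can be transiently elevated by unrelated users,
         * e.g. get_task_mm() from a /proc reader, so a value > 1
         * does not prove the process is multi-threaded and the
         * check can only ever be a heuristic. The cost of guessing
         * wrong is an unnecessary speculative attempt rather than a
         * correctness problem, so it probably does not matter much.
         */
        return atomic_read(&mm->mm_users) > 1;
}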

-- 
Mel Gorman
SUSE Labs



