Re: [PATCH v2 0/3] mm: tlb swap entries batch async release

On 2024/8/1 0:17, Andrew Morton wrote:

> On Wed, 31 Jul 2024 21:33:14 +0800 Zhiguo Jiang <justinjiang@xxxxxxxx> wrote:

>> The main reason for the prolonged exit of a background process is the
> The kernel really doesn't have a concept of a "background process".
> It's a userspace concept - perhaps "the parent process isn't waiting on
> this process via wait()".
>
> I assume here you're referring to an Android userspace concept?  I
> expect that when Android "backgrounds" a process, it does lots of
> things to that process.  Perhaps scheduling priority, perhaps
> alteration of various MM tunables, etc.
>
> So rather than referring to "backgrounding" it would be better to
> identify what tuning alterations are made to such processes to bring
> about this behavior.
Hi Andrew Morton,

Thank you for your review and comments.

You are right. The "background process" here refers to the process
corresponding to an Android application that has been switched to the
background. In fact, this patch is applicable to any exiting process.

To further explain the concept of "multiple exiting processes": it
refers to different processes that each own an independent mm, rather
than processes that share the same mm.

I will use "mm" to describe the process instead of "background" in the
next version.

>> time-consuming release of its swap entries. The proportion of swap memory
>> occupied by the background process increases with its duration in the
>> background, and after a period of time, this value can reach 60% or more.
> Again, what is it about the tuning of such processes which causes this
> behavior?
When the system is low on memory, memory reclaim is triggered and the
anonymous folios of the process are continuously reclaimed, which
increases the number of swap entries occupied by the process. So the
longer the process stays in this state, the more time it takes to
release its swap entries when it is killed.

Test data of the physical memory occupied by a process at different
time points:
Test platform: 8GB RAM
Test procedure:
After booting up, start 15 processes first, and then observe the
physical memory occupied by the last launched process at different
time points.

Example:
The process launched last: com.qiyi.video
|  memory type  |  0min  |  1min  | BG 5min | BG 10min | BG 15min |
-------------------------------------------------------------------
|     VmRSS(KB) | 453832 | 252300 |  204364 |   199944 |  199748  |
|   RssAnon(KB) | 247348 |  99296 |   71268 |    67808 |   67660  |
|   RssFile(KB) | 205536 | 152020 |  132144 |   131184 |  131136  |
|  RssShmem(KB) |   1048 |    984 |     952 |     952  |     952  |
|    VmSwap(KB) | 202692 | 334852 |  362880 |   366340 |  366488  |
| Swap ratio(%) | 30.87% | 57.03% |  63.97% |   64.69% |  64.72%  |
min - minute, BG - background.

Based on the above data, we can see that the swap ratio occupied by the
process (computed here as VmSwap / (VmRSS + VmSwap)) gradually
increases over time.
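
For reference, all of the values in the table can be read from
/proc/<pid>/status. Below is a minimal userspace sketch (not from this
series, just one way to sample these values) that dumps the relevant
fields for a given pid:

#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
	char path[64], line[256];
	FILE *fp;

	if (argc < 2) {
		fprintf(stderr, "usage: %s <pid>\n", argv[0]);
		return 1;
	}
	snprintf(path, sizeof(path), "/proc/%s/status", argv[1]);
	fp = fopen(path, "r");
	if (!fp) {
		perror("fopen");
		return 1;
	}
	while (fgets(line, sizeof(line), fp)) {
		/* The fields shown in the table above. */
		if (!strncmp(line, "VmRSS:", 6) ||
		    !strncmp(line, "RssAnon:", 8) ||
		    !strncmp(line, "RssFile:", 8) ||
		    !strncmp(line, "RssShmem:", 9) ||
		    !strncmp(line, "VmSwap:", 7))
			fputs(line, stdout);
	}
	fclose(fp);
	return 0;
}

Running it against the same pid at each time point produces the rows
of the table above.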

>> Additionally, the relatively lengthy path for releasing swap entries
>> further contributes to the longer time required for the background process
>> to release its swap entries.
>>
>> In the multiple background applications scenario, when launching a
>> large-memory application such as a camera, the system may enter a low
>> memory state, which triggers the killing of multiple background processes
>> at the same time. Because the multiple exiting processes occupy multiple
>> CPUs for concurrent execution, the foreground application's CPU resources
>> become tight, which may cause issues such as lagging.
>>
>> To solve this problem, we have introduced the multiple exiting process
>> asynchronous swap memory release mechanism, which isolates and caches
>> the swap entries occupied by multiple exiting processes and hands them
>> over to an asynchronous kworker to complete the release. This allows the
>> exiting processes to complete quickly and release CPU resources. We have
>> validated this modification on products and achieved the expected
>> benefits.
> Dumb question: why can't this be done in userspace?  The exiting
> process does fork/exit and lets the child do all this asynchronous freeing?
The optimization of the kernel's swap entry freeing logic cannot be
implemented in userspace. The multiple exiting processes here each own
an independent mm, rather than being parent and child processes that
share the same mm. Therefore, when the kernel executes multiple process
exits simultaneously, they will inevitably occupy multiple CPU cores to
complete the work.
>> It offers several benefits:
>> 1. Alleviate the high system CPU load caused by multiple exiting
>>    processes running simultaneously.
>> 2. Reduce lock contention in the swap entry free path by using an
>>    asynchronous kworker instead of multiple exiting processes executing
>>    in parallel.
> Why is lock contention reduced?  The same amount of work needs to be
> done.
When multiple CPU cores simultaneously release the swap entries that
belong to different exiting processes, the cluster lock or swap_info
lock may run into contention. If an asynchronous kworker that occupies
only one CPU core is used to complete this work instead, the
probability of lock contention is reduced and the remaining CPU cores
are freed up for other, non-exiting processes to use.
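
To make the idea concrete, here is a rough sketch of such a hand-off
(hypothetical names such as swap_release_batch and
queue_swap_entries_for_release; this is not the code in this series):
the exiting task only parks its isolated swap entries on a list, and a
single work item drains the list and frees the entries:

#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/swap.h>
#include <linux/workqueue.h>

/* Hypothetical container for swap entries isolated from one exiting mm. */
struct swap_release_batch {
	struct list_head node;
	unsigned int nr;
	swp_entry_t entries[64];
};

static LIST_HEAD(release_list);
static DEFINE_SPINLOCK(release_lock);

static void swap_release_workfn(struct work_struct *work)
{
	struct swap_release_batch *batch, *tmp;
	LIST_HEAD(local);
	unsigned int i;

	/* Detach pending batches so producers hold the lock only briefly. */
	spin_lock(&release_lock);
	list_splice_init(&release_list, &local);
	spin_unlock(&release_lock);

	/* From here on, only this single worker contends on the swap locks. */
	list_for_each_entry_safe(batch, tmp, &local, node) {
		for (i = 0; i < batch->nr; i++)
			free_swap_and_cache(batch->entries[i]);
		list_del(&batch->node);
		kfree(batch);
	}
}

static DECLARE_WORK(swap_release_work, swap_release_workfn);

/* Called from the exit path instead of freeing the entries inline. */
static void queue_swap_entries_for_release(struct swap_release_batch *batch)
{
	spin_lock(&release_lock);
	list_add_tail(&batch->node, &release_list);
	spin_unlock(&release_lock);
	schedule_work(&swap_release_work);
}

The actual series does the batching from the TLB-gather path (per the
subject line); the sketch only illustrates how a single consumer
serializes access to the cluster/swap_info locks instead of having
every exiting process take them in parallel.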

>> 3. Release memory occupied by exiting processes more efficiently.
> Probably it's slightly less efficient.
We observed that using an asynchronous kworker can result in more free
memory becoming available earlier. When multiple processes exit
simultaneously, competition for CPU cores keeps these exiting processes
in the runnable state for a long time, so they cannot release the
memory they occupy in a timely manner.

> There are potential problems with this approach of passing work to a
> kernel thread:
>
> - The process will exit while its resources are still allocated.  But
>    its parent process assumes those resources are now all freed and the
>    parent process then proceeds to allocate resources.  This results in
>    a time period where peak resource consumption is higher than it was
>    before such a change.
- I don't think this modification will cause such a problem. Perhaps I
  haven't fully understood your meaning yet. Can you give me a specific
  example?
> - If all CPUs are running in userspace with realtime policy
>    (SCHED_FIFO, for example) then the kworker thread will not run,
>    indefinitely.
- In my understanding, the execution priority of kernel threads should
  not be lower than that of the exiting process, and the asynchronous
  kworker is only triggered when a process exits. The exiting process
  should not be set to SCHED_FIFO, so when the exiting process runs,
  the asynchronous kworker should also have the opportunity to run in
  a timely manner.
> - Work which should have been accounted to the exiting process will
>    instead go unaccounted.
- You are right, the statistics of process exit time may no longer be
  complete.
> So please fully address all these potential issues.
Thanks
Zhiguo




