Re: [PATCH v5] mm: shrink skip folio mapped by an exiting process


 



On 08.07.24 11:46, Barry Song wrote:
On Mon, Jul 8, 2024 at 9:36 PM David Hildenbrand <david@xxxxxxxxxx> wrote:

On 08.07.24 11:04, Zhiguo Jiang wrote:
Releasing a non-shared anonymous folio mapped solely by an exiting
process may go through two flows: 1) the anonymous folio is first
swapped out to swap space and converted into a swp_entry in
shrink_folio_list; 2) the swp_entry is then released in the process
exit flow. This increases the CPU load of releasing a non-shared
anonymous folio mapped solely by an exiting process, because the folio
goes through swap-out and then the swap space and swp_entry have to be
released as well.

When the system is low on memory, this is more likely to occur, because
more background applications will be killed.

This patch makes shrink skip a non-shared anonymous folio mapped solely
by an exiting process, so that the folio is released directly in the
process exit flow instead, which saves the swap-out work and alleviates
the load of the exiting process.

Signed-off-by: Zhiguo Jiang <justinjiang@xxxxxxxx>
---

Change log:
v4->v5:
1. Modify to skip non-shared anonymous folios only.
2. Update the comments for pra->referenced = -1.
v3->v4:
1. Modify so that unshared folios mapped only by an exiting task are skipped.
v2->v3:
Nothing.
v1->v2:
1. The VM_EXITING flag added in the v1 patch is removed, because it fails
to compile on 32-bit systems.

   mm/rmap.c   | 13 +++++++++++++
   mm/vmscan.c |  7 ++++++-
   2 files changed, 19 insertions(+), 1 deletion(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 26806b49a86f..5b5281d71dbb
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -843,6 +843,19 @@ static bool folio_referenced_one(struct folio *folio,
       int referenced = 0;
       unsigned long start = address, ptes = 0;

+     /*
+      * Skip a non-shared anonymous folio mapped solely by a single
+      * exiting process, and release it directly in the process
+      * exit flow instead.
+      */
+     if ((!atomic_read(&vma->vm_mm->mm_users) ||
+             test_bit(MMF_OOM_SKIP, &vma->vm_mm->flags)) &&
+             folio_test_anon(folio) && folio_test_swapbacked(folio) &&
+             !folio_likely_mapped_shared(folio)) {

I'm currently working on moving all folio_likely_mapped_shared() under
the PTL, where we are then sure that the folio is actually mapped by
this process (e.g., no concurrent unmapping possible).

Can we do the same here directly?

Implementing this is challenging because page_vma_mapped_walk() is
responsible for traversing the page table to acquire and release the
PTL. This becomes particularly complex with mTHP, as we may need to
interrupt the page_vma_mapped_walk() loop at the first PTE.

Why can't we perform the check under the PTL and bail out? I'm
probably missing something important.
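
Something along these lines, perhaps? Only a rough, untested sketch of
the idea against the current folio_referenced_one() loop, reusing the
exact condition from the patch; the early-return plumbing via
pra->referenced = -1 is taken from the patch as well, and the precise
placement inside the loop is just illustrative:

	while (page_vma_mapped_walk(&pvmw)) {
		address = pvmw.address;

		/*
		 * pvmw.ptl is held here, so the folio is known to still
		 * be mapped by this mm at this point.
		 */
		if ((!atomic_read(&vma->vm_mm->mm_users) ||
		     test_bit(MMF_OOM_SKIP, &vma->vm_mm->flags)) &&
		    folio_test_anon(folio) && folio_test_swapbacked(folio) &&
		    !folio_likely_mapped_shared(folio)) {
			/* Let folio_referenced() know to skip this folio. */
			pra->referenced = -1;
			/* Drop the PTL and end the page table walk. */
			page_vma_mapped_walk_done(&pvmw);
			return false;	/* Abort the rmap walk. */
		}

		/* ... existing reference checks continue here ... */
	}

That check would only need to run once, on the first mapped PTE, even
for a large folio.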

--
Cheers,

David / dhildenb




