[PATCH 5-fix/6] mm: remove isolate_lru_page() fix

Convert page to folio in comments and documentation, per Matthew.

Signed-off-by: Kefeng Wang <wangkefeng.wang@xxxxxxxxxx>
---

Andrew,
Please squash this fix into [PATCH 5/6] "mm: remove isolate_lru_page()".
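
For reference, a rough sketch of the in-kernel flow that the updated
page_migration.rst below describes: isolate folios with
folio_isolate_lru(), collect them on a list, then hand the list and a
new_folio_t allocator to migrate_pages(). Illustration only, not part
of the patch; the demo_* names are made up, and it assumes an
mm-internal caller since folio_isolate_lru() is declared in
mm/internal.h.

#include <linux/mm.h>
#include <linux/migrate.h>
#include <linux/gfp.h>
#include "internal.h"           /* folio_isolate_lru() is mm-internal */

/* new_folio_t callback: allocate a destination folio of the same order. */
static struct folio *demo_alloc_dst(struct folio *src, unsigned long private)
{
        return folio_alloc(GFP_HIGHUSER_MOVABLE, folio_order(src));
}

static int demo_migrate_one(struct folio *folio)
{
        LIST_HEAD(folio_list);
        int ret;

        /* Step 1: take the folio off the LRU; this also takes a reference. */
        if (!folio_isolate_lru(folio))
                return -EBUSY;
        list_add_tail(&folio->lru, &folio_list);

        /* Step 2: migrate the list, allocating targets via demo_alloc_dst(). */
        ret = migrate_pages(&folio_list, demo_alloc_dst, NULL, 0,
                            MIGRATE_SYNC, MR_SYSCALL, NULL);
        if (ret)
                putback_movable_pages(&folio_list);

        return ret;
}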

 Documentation/mm/page_migration.rst  | 18 +++++++++---------
 Documentation/mm/unevictable-lru.rst |  2 +-
 mm/khugepaged.c                      |  2 +-
 mm/migrate_device.c                  |  4 ++--
 mm/swap.c                            |  4 ++--
 5 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/Documentation/mm/page_migration.rst b/Documentation/mm/page_migration.rst
index 0046bbbdc65d..519b35a4caf5 100644
--- a/Documentation/mm/page_migration.rst
+++ b/Documentation/mm/page_migration.rst
@@ -63,15 +63,15 @@ and then a low level description of how the low level details work.
 In kernel use of migrate_pages()
 ================================
 
-1. Remove pages from the LRU.
+1. Remove folios from the LRU.
 
-   Lists of pages to be migrated are generated by scanning over
-   pages and moving them into lists. This is done by
+   Lists of folios to be migrated are generated by scanning over
+   folios and moving them into lists. This is done by
    calling folio_isolate_lru().
-   Calling folio_isolate_lru() increases the references to the page
-   so that it cannot vanish while the page migration occurs.
+   Calling folio_isolate_lru() increases the references to the folio
+   so that it cannot vanish while the folio migration occurs.
    It also prevents the swapper or other scans from encountering
-   the page.
+   the folio.
 
 2. We need to have a function of type new_folio_t that can be
    passed to migrate_pages(). This function should figure out
@@ -84,10 +84,10 @@ In kernel use of migrate_pages()
 How migrate_pages() works
 =========================
 
-migrate_pages() does several passes over its list of pages. A page is moved
-if all references to a page are removable at the time. The page has
+migrate_pages() does several passes over its list of folios. A folio is moved
+if all references to a folio are removable at the time. The folio has
 already been removed from the LRU via folio_isolate_lru() and the refcount
-is increased so that the page cannot be freed while page migration occurs.
+is increased so that the folio cannot be freed while folio migration occurs.
 
 Steps:
 
diff --git a/Documentation/mm/unevictable-lru.rst b/Documentation/mm/unevictable-lru.rst
index 04113c2a2f9e..8d11fe6a0854 100644
--- a/Documentation/mm/unevictable-lru.rst
+++ b/Documentation/mm/unevictable-lru.rst
@@ -80,7 +80,7 @@ on an additional LRU list for a few reasons:
  (2) We want to be able to migrate unevictable folios between nodes for memory
      defragmentation, workload management and memory hotplug.  The Linux kernel
      can only migrate folios that it can successfully isolate from the LRU
-     lists (or "Movable" pages: outside of consideration here).  If we were to
+     lists (or "Movable" folios: outside of consideration here).  If we were to
      maintain folios elsewhere than on an LRU-like list, where they can be
      detected by folio_isolate_lru(), we would prevent their migration.
 
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index b87eacfac5a7..ab646018ce25 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -628,7 +628,7 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 
 		/*
 		 * We can do it before folio_isolate_lru because the
-		 * page can't be freed from under us. NOTE: PG_lock
+		 * folio can't be freed from under us. NOTE: PG_lock
 		 * is needed to serialize against split_huge_page
 		 * when invoked from the VM.
 		 */
diff --git a/mm/migrate_device.c b/mm/migrate_device.c
index f1faff058491..8d687de88a03 100644
--- a/mm/migrate_device.c
+++ b/mm/migrate_device.c
@@ -328,8 +328,8 @@ static bool migrate_vma_check_page(struct page *page, struct page *fault_page)
 
 	/*
 	 * One extra ref because caller holds an extra reference, either from
-	 * folio_isolate_lru() for a regular page, or migrate_vma_collect() for
-	 * a device page.
+	 * folio_isolate_lru() for a regular folio, or migrate_vma_collect() for
+	 * a device folio.
 	 */
 	int extra = 1 + (page == fault_page);
 
diff --git a/mm/swap.c b/mm/swap.c
index 634fde80cd44..510573d7e82e 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -906,8 +906,8 @@ atomic_t lru_disable_count = ATOMIC_INIT(0);
 
 /*
  * lru_cache_disable() needs to be called before we start compiling
- * a list of pages to be migrated using folio_isolate_lru().
- * It drains pages on LRU cache and then disable on all cpus until
+ * a list of folios to be migrated using folio_isolate_lru().
+ * It drains folios from the LRU cache and then disables it on all CPUs until
  * lru_cache_enable is called.
  *
  * Must be paired with a call to lru_cache_enable().
-- 
2.27.0
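
A usage note on the lru_cache_disable() comment touched above: the
disable must bracket the whole collection phase and be paired with
lru_cache_enable(), roughly as in this made-up sketch (illustration
only, not part of the patch):

#include <linux/swap.h>

static void demo_collect_and_migrate(void)
{
        /*
         * Drain the per-CPU LRU caches and keep them disabled, so folios
         * cannot linger in per-CPU batches while the migration list is
         * built with folio_isolate_lru().
         */
        lru_cache_disable();

        /* ... isolate folios onto a list and call migrate_pages() ... */

        /* Must be paired with the lru_cache_disable() above. */
        lru_cache_enable();
}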




