Re: [PATCH v7 01/10] mm/vmscan: remove unnecessary lruvec adding

On 25/12/2019 12.04, Alex Shi wrote:
We don't have to add a freeable page into lru and then remove from it.
This change saves a couple of actions and makes the moving more clear.

The SetPageLRU needs to be kept here for list integrity. Otherwise:

     #0 move_pages_to_lru               #1 release_pages
                                        if (put_page_testzero())
     if (put_page_testzero())
                                            !PageLRU //skip lru_lock
                                            list_add(&page->lru,);
     else
         list_add(&page->lru,) //corrupt

Signed-off-by: Alex Shi <alex.shi@xxxxxxxxxxxxxxxxx>
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: yun.wang@xxxxxxxxxxxxxxxxx
Cc: linux-mm@xxxxxxxxx
Cc: linux-kernel@xxxxxxxxxxxxxxx
---
  mm/vmscan.c | 16 +++++++---------
  1 file changed, 7 insertions(+), 9 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 572fb17c6273..8719361b47a0 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1852,26 +1852,18 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,

Here is another possible cleanup: pass only pgdat as the argument.

This function re-evaluates the lruvec for each page under the lru lock.
Probably that is redundant for now, but it could be useful in the future (or your patchset already uses it).
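
I.e. something like this (untested sketch, just to show the idea; the
lruvec then becomes a local that is looked up per page):

	static unsigned noinline_for_stack move_pages_to_lru(struct pglist_data *pgdat,
							      struct list_head *list)
	{
		/* re-evaluated per page via mem_cgroup_page_lruvec() */
		struct lruvec *lruvec;
		...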

  	while (!list_empty(list)) {
  		page = lru_to_page(list);
  		VM_BUG_ON_PAGE(PageLRU(page), page);
+		list_del(&page->lru);
  		if (unlikely(!page_evictable(page))) {
-			list_del(&page->lru);
  			spin_unlock_irq(&pgdat->lru_lock);
  			putback_lru_page(page);
  			spin_lock_irq(&pgdat->lru_lock);
  			continue;
  		}
-		lruvec = mem_cgroup_page_lruvec(page, pgdat);
-

Please add a comment here noting that we must set PageLRU before dropping our page reference.
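
E.g. something like (wording is just a suggestion):

		/*
		 * Set PageLRU before we drop our reference: once the
		 * reference is dropped, the page can be freed via
		 * release_pages(), which only takes lru_lock and unlinks
		 * the page if it sees PageLRU set.
		 */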

  		SetPageLRU(page);
-		lru = page_lru(page);
-
-		nr_pages = hpage_nr_pages(page);
-		update_lru_size(lruvec, lru, page_zonenum(page), nr_pages);
-		list_move(&page->lru, &lruvec->lists[lru]);
  		if (put_page_testzero(page)) {
  			__ClearPageLRU(page);
  			__ClearPageActive(page);
-			del_page_from_lru_list(page, lruvec, lru);
  			if (unlikely(PageCompound(page))) {
  				spin_unlock_irq(&pgdat->lru_lock);
@@ -1880,6 +1872,12 @@ static unsigned noinline_for_stack move_pages_to_lru(struct lruvec *lruvec,
  			} else
  				list_add(&page->lru, &pages_to_free);
  		} else {
+			lruvec = mem_cgroup_page_lruvec(page, pgdat);
+			lru = page_lru(page);
+			nr_pages = hpage_nr_pages(page);
+
+			update_lru_size(lruvec, lru, page_zonenum(page), nr_pages);
+			list_add(&page->lru, &lruvec->lists[lru]);
  			nr_moved += nr_pages;
  		}

IMHO it looks better to do it this way:

SetPageLRU()

if (unlikely(put_page_testzero())) {
 <free>
 continue;
}

<add>
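
I.e. roughly (untested, mostly just the existing lines rearranged; the
compound-page branch is from memory of the current code):

		SetPageLRU(page);

		if (unlikely(put_page_testzero(page))) {
			__ClearPageLRU(page);
			__ClearPageActive(page);

			if (unlikely(PageCompound(page))) {
				spin_unlock_irq(&pgdat->lru_lock);
				(*get_compound_page_dtor(page))(page);
				spin_lock_irq(&pgdat->lru_lock);
			} else
				list_add(&page->lru, &pages_to_free);
			continue;
		}

		lruvec = mem_cgroup_page_lruvec(page, pgdat);
		lru = page_lru(page);
		nr_pages = hpage_nr_pages(page);

		update_lru_size(lruvec, lru, page_zonenum(page), nr_pages);
		list_add(&page->lru, &lruvec->lists[lru]);
		nr_moved += nr_pages;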

  	}



