Re: [PATCH v3 2/5] ksm: support unsharing zero pages placed by KSM

On 21.10.22 14:54, David Hildenbrand wrote:
On 21.10.22 12:17, David Hildenbrand wrote:
On 11.10.22 04:22, xu.xin.sc@xxxxxxxxx wrote:
From: xu xin <xu.xin16@xxxxxxxxxx>

use_zero_pages may be very useful, not just because of cache colouring
as described in the documentation, but also because it can accelerate
merging empty pages when there are plenty of empty pages (full of zeros),
since the time spent on page-by-page comparisons
(unstable_tree_search_insert) is saved.

But when use_zero_pages is enabled, madvise(addr, len, MADV_UNMERGEABLE)
and other ways of triggering unsharing (like writing 2 to
/sys/kernel/mm/ksm/run) will *not* unshare the shared zeropages placed by
KSM (which arguably contradicts the MADV_UNMERGEABLE documentation).

To avoid blindly unsharing all shared zero pages in applicable VMAs, this
patch introduces a dedicated flag, ZERO_PAGE_FLAG, to mark the rmap_items
of those shared zero pages, and guarantees that these rmap_items are not
freed as long as the zero pages have not been written to, so that only
the *KSM-placed* zero pages are unshared.

The patch does not degrade the performance of use_zero_pages, as it does
not change the way empty pages are merged.

Fixes: e86c59b1b12d ("mm/ksm: improve deduplication of zero pages with colouring")
Reported-by: David Hildenbrand <david@xxxxxxxxxx>
Cc: Claudio Imbrenda <imbrenda@xxxxxxxxxxxxx>
Cc: Xuexin Jiang <jiang.xuexin@xxxxxxxxxx>
Signed-off-by: xu xin <xu.xin16@xxxxxxxxxx>
Co-developed-by: Xiaokai Ran <ran.xiaokai@xxxxxxxxxx>
Signed-off-by: Xiaokai Ran <ran.xiaokai@xxxxxxxxxx>
Co-developed-by: Yang Yang <yang.yang29@xxxxxxxxxx>
Signed-off-by: Yang Yang <yang.yang29@xxxxxxxxxx>
Signed-off-by: xu xin <xu.xin16@xxxxxxxxxx>
---
 mm/ksm.c | 136 ++++++++++++++++++++++++++++++++++++++++++-------------
 1 file changed, 105 insertions(+), 31 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 13c60f1071d8..e351d7b6d15e 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -213,6 +213,7 @@ struct ksm_rmap_item {
 #define SEQNR_MASK	0x0ff	/* low bits of unstable tree seqnr */
 #define UNSTABLE_FLAG	0x100	/* is a node of the unstable tree */
 #define STABLE_FLAG	0x200	/* is listed from the stable tree */
+#define ZERO_PAGE_FLAG	0x400	/* is zero page placed by KSM */

 /* The stable and unstable tree heads */
 static struct rb_root one_stable_tree[1] = { RB_ROOT };
@@ -381,14 +382,6 @@ static inline struct ksm_rmap_item *alloc_rmap_item(void)
 	return rmap_item;
 }

-static inline void free_rmap_item(struct ksm_rmap_item *rmap_item)
-{
-	ksm_rmap_items--;
-	rmap_item->mm->ksm_rmap_items--;
-	rmap_item->mm = NULL;	/* debug safety */
-	kmem_cache_free(rmap_item_cache, rmap_item);
-}
-
 static inline struct ksm_stable_node *alloc_stable_node(void)
 {
 	/*
@@ -420,7 +413,8 @@ static inline bool ksm_test_exit(struct mm_struct *mm)
 }

 /*
- * We use break_ksm to break COW on a ksm page: it's a stripped down
+ * We use break_ksm to break COW on a ksm page or a KSM-placed zero page
+ * (the latter only when use_zero_pages is enabled): it's a stripped down
  *
  *	if (get_user_pages(addr, 1, FOLL_WRITE, &page, NULL) == 1)
  *		put_page(page);
@@ -434,7 +428,8 @@ static inline bool ksm_test_exit(struct mm_struct *mm)
  * of the process that owns 'vma'.  We also do not want to enforce
  * protection keys here anyway.
  */
-static int break_ksm(struct vm_area_struct *vma, unsigned long addr)
+static int break_ksm(struct vm_area_struct *vma, unsigned long addr,
+				     bool ksm_check_bypass)
 {
 	struct page *page;
 	vm_fault_t ret = 0;
@@ -449,6 +444,16 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr)
 			ret = handle_mm_fault(vma, addr,
 					      FAULT_FLAG_WRITE | FAULT_FLAG_REMOTE,
 					      NULL);
+		else if (ksm_check_bypass && is_zero_pfn(page_to_pfn(page))) {
+			/*
+			 * Although this is not a KSM page, it is a zero page
+			 * placed by KSM's use_zero_pages feature, so unshare
+			 * it when ksm_check_bypass is true.
+			 */
+			ret = handle_mm_fault(vma, addr,
+					      FAULT_FLAG_WRITE | FAULT_FLAG_REMOTE,
+					      NULL);
+		}

Please don't duplicate that page fault triggering code.
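
Something like the following (just a sketch, untested) would keep a
single call site by folding the zero-page check into the existing
PageKsm() condition:

	if (PageKsm(page) ||
	    (ksm_check_bypass && is_zero_pfn(page_to_pfn(page))))
		ret = handle_mm_fault(vma, addr,
				      FAULT_FLAG_WRITE | FAULT_FLAG_REMOTE,
				      NULL);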

Also, please be aware that this collides with

https://lkml.kernel.org/r/20221021101141.84170-1-david@xxxxxxxxxx

Adjustments should be comparatively easy.

... except that I'm still working on FAULT_FLAG_UNSHARE support for the
shared zeropage. That will be posted soonish (within the next two weeks).


Posted: https://lkml.kernel.org/r/20221107161740.144456-1-david@xxxxxxxxxx

With that, we can use FAULT_FLAG_UNSHARE also to break COW on the shared zeropage.
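
Roughly, break_ksm() could then always do something like this (sketch
only, untested) instead of the FAULT_FLAG_WRITE dance:

	ret = handle_mm_fault(vma, addr,
			      FAULT_FLAG_UNSHARE | FAULT_FLAG_REMOTE,
			      NULL);

That breaks COW without mapping the page writable, and it covers both
KSM pages and the shared zeropage.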

--
Thanks,

David / dhildenb
