Re: [PATCH v19 3/8] set_memory: allow set_direct_map_*_noflush() for multiple pages

On 13.05.21 20:47, Mike Rapoport wrote:
From: Mike Rapoport <rppt@xxxxxxxxxxxxx>

The underlying implementations of set_direct_map_invalid_noflush() and
set_direct_map_default_noflush() allow updating multiple contiguous pages
at once.

Add numpages parameter to set_direct_map_*_noflush() to expose this
ability with these APIs.
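
(For reference, the corresponding prototype change, presumably in
include/linux/set_memory.h, looks roughly like:

-int set_direct_map_invalid_noflush(struct page *page);
-int set_direct_map_default_noflush(struct page *page);
+int set_direct_map_invalid_noflush(struct page *page, int numpages);
+int set_direct_map_default_noflush(struct page *page, int numpages);
)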


[...]

Finally doing some in-depth review, sorry for not having a detailed look earlier.


-int set_direct_map_invalid_noflush(struct page *page)
+int set_direct_map_invalid_noflush(struct page *page, int numpages)
  {
  	struct page_change_data data = {
  		.set_mask = __pgprot(0),
  		.clear_mask = __pgprot(PTE_VALID),
  	};
+	unsigned long size = PAGE_SIZE * numpages;

Nit: I'd have made this const and added an early exit for !numpages. But whatever you prefer.
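Untested sketch of what I mean:

	int set_direct_map_invalid_noflush(struct page *page, int numpages)
	{
		const unsigned long size = PAGE_SIZE * numpages;
		struct page_change_data data = {
			.set_mask = __pgprot(0),
			.clear_mask = __pgprot(PTE_VALID),
		};

		/* nothing to do for an empty range */
		if (!numpages)
			return 0;

		if (!debug_pagealloc_enabled() && !rodata_full)
			return 0;

		return apply_to_page_range(&init_mm,
					   (unsigned long)page_address(page),
					   size, change_page_range, &data);
	}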

  	if (!debug_pagealloc_enabled() && !rodata_full)
  		return 0;

  	return apply_to_page_range(&init_mm,
  				   (unsigned long)page_address(page),
-				   PAGE_SIZE, change_page_range, &data);
+				   size, change_page_range, &data);
  }

-int set_direct_map_default_noflush(struct page *page)
+int set_direct_map_default_noflush(struct page *page, int numpages)
  {
  	struct page_change_data data = {
  		.set_mask = __pgprot(PTE_VALID | PTE_WRITE),
  		.clear_mask = __pgprot(PTE_RDONLY),
  	};
+	unsigned long size = PAGE_SIZE * numpages;

Nit: ditto

  	if (!debug_pagealloc_enabled() && !rodata_full)
  		return 0;

  	return apply_to_page_range(&init_mm,
  				   (unsigned long)page_address(page),
-				   PAGE_SIZE, change_page_range, &data);
+				   size, change_page_range, &data);
  }
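
A caller would then be able to invalidate a contiguous range in one call,
e.g. (hypothetical usage):

	/* invalidate 4 contiguous pages in the direct map, without TLB flush */
	err = set_direct_map_invalid_noflush(page, 4);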


[...]

  extern int kernel_set_to_readonly;
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 156cd235659f..15a55d6e9cec 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -2192,14 +2192,14 @@ static int __set_pages_np(struct page *page, int numpages)
  	return __change_page_attr_set_clr(&cpa, 0);
  }

-int set_direct_map_invalid_noflush(struct page *page)
+int set_direct_map_invalid_noflush(struct page *page, int numpages)
  {
-	return __set_pages_np(page, 1);
+	return __set_pages_np(page, numpages);
  }

-int set_direct_map_default_noflush(struct page *page)
+int set_direct_map_default_noflush(struct page *page, int numpages)
  {
-	return __set_pages_p(page, 1);
+	return __set_pages_p(page, numpages);
  }

So, what happens if we succeed in setting set_direct_map_invalid_noflush() for some pages, but then fail when having to split a large mapping?

Did I miss something, or would the current code not undo what it partially did? Or do we simply not care?

I guess to handle this cleanly, we would either have to catch all error cases upfront (esp. splitting large mappings) before actually performing the set-to-invalid, or have some recovery code in place, if possible.
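
E.g., a very rough sketch of the best-effort recovery variant (hypothetical, assuming set_direct_map_default_noflush() is enough to re-validate whatever we already flipped):

	ret = set_direct_map_invalid_noflush(page, numpages);
	if (ret)
		/* best effort: restore any entries we already invalidated */
		set_direct_map_default_noflush(page, numpages);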


AFAIKS, your patch #5 right now only calls it with a single page, so do we need this change at all? Feels like a leftover to me from older versions, where we could have had more than a single page.

--
Thanks,

David / dhildenb



