Re: [PATCH v2 2/5] mm, page_poison: use static key more efficiently

On 03.11.20 16:22, Vlastimil Babka wrote:
Commit 11c9c7edae06 ("mm/page_poison.c: replace bool variable with static key")
changed page_poisoning_enabled() to a static key check. However, the function
is not inlined, so each check still involves a function call whose overhead is
not eliminated when page poisoning is disabled.

Analogously to how debug_pagealloc is handled, this patch converts
page_poisoning_enabled() back to a boolean check, and introduces
page_poisoning_enabled_static() for fast paths. Both functions are inlined.
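
(For reference, the debug_pagealloc variant of this pattern looks roughly
like the sketch below; the real declarations live in include/linux/mm.h, so
treat the exact details here as illustrative.)

DECLARE_STATIC_KEY_FALSE(_debug_pagealloc_enabled);
extern bool _debug_pagealloc_enabled_early;

/* Slow path: a plain bool read, safe to call before static keys are set up. */
static inline bool debug_pagealloc_enabled(void)
{
	return IS_ENABLED(CONFIG_DEBUG_PAGEALLOC) &&
		_debug_pagealloc_enabled_early;
}

/* Fast path: compiles down to a patched jump once the key is enabled. */
static inline bool debug_pagealloc_enabled_static(void)
{
	if (!IS_ENABLED(CONFIG_DEBUG_PAGEALLOC))
		return false;

	return static_branch_unlikely(&_debug_pagealloc_enabled);
}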

The function kernel_poison_pages() is also called unconditionally and does
the static key check inside. Remove the check from there and put it in the
callers. Also split the function into kernel_poison_pages() and
kernel_unpoison_pages() instead of using a confusing bool parameter.
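
(On the callee side, that split could look roughly like this; poison_pages()
and unpoison_pages() stand for the existing static helpers in
mm/page_poison.c, so take the exact names as illustrative.)

/* No static key check here anymore - the callers perform it. */
void kernel_poison_pages(struct page *page, int numpages)
{
	poison_pages(page, numpages);
}

void kernel_unpoison_pages(struct page *page, int numpages)
{
	unpoison_pages(page, numpages);
}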

Also optimize the check that enables page poisoning instead of debug_pagealloc
for architectures without proper debug_pagealloc support. Move the check to
init_mem_debugging_and_hardening() to enable a single static key instead of
having two static branches in page_poisoning_enabled_static().

[...]

+ * For use in fast paths after init_mem_debugging() has run, or when a
+ * false negative result is not harmful when called too early.
+ */
+static inline bool page_poisoning_enabled_static(void)
+{
+	return (static_branch_unlikely(&_page_poisoning_enabled));

As already mentioned IIRC:

return static_branch_unlikely(&_page_poisoning_enabled);

+}
  #else
  static inline bool page_poisoning_enabled(void) { return false; }
-static inline void kernel_poison_pages(struct page *page, int numpages,
-					int enable) { }
+static inline bool page_poisoning_enabled_static(void) { return false; }
+static inline void kernel_poison_pages(struct page *page, int numpages) { }
+static inline void kernel_unpoison_pages(struct page *page, int numpages) { }
  #endif
DECLARE_STATIC_KEY_FALSE(init_on_alloc);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 44d596c9c764..fd7f9345adc0 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -775,6 +775,17 @@ void init_mem_debugging_and_hardening(void)
  			static_branch_enable(&init_on_free);
  	}
+#ifdef CONFIG_PAGE_POISONING
+	/*
+	 * Page poisoning is debug page alloc for some arches. If
+	 * either of those options are enabled, enable poisoning.
+	 */
+	if (page_poisoning_enabled() ||
+	     (!IS_ENABLED(CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC) &&
+	      debug_pagealloc_enabled()))
+		static_branch_enable(&_page_poisoning_enabled);
+#endif
+
  #ifdef CONFIG_DEBUG_PAGEALLOC
  	if (!debug_pagealloc_enabled())
  		return;
@@ -1260,7 +1271,8 @@ static __always_inline bool free_pages_prepare(struct page *page,
  	if (want_init_on_free())
  		kernel_init_free_pages(page, 1 << order);
-	kernel_poison_pages(page, 1 << order, 0);
+	if (page_poisoning_enabled_static())
+		kernel_poison_pages(page, 1 << order);

This would look much better if kernel_poison_pages() were simply implemented in a header, where the static key check is performed.

Take a look at how it's handled in mm/shuffle.h
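
Something along these lines, as a sketch (the out-of-line
__kernel_poison_pages() worker is my naming, purely illustrative):

/* In the header: callers get the static key check inlined. */
extern void __kernel_poison_pages(struct page *page, int numpages);

static inline void kernel_poison_pages(struct page *page, int numpages)
{
	if (page_poisoning_enabled_static())
		__kernel_poison_pages(page, numpages);
}

That way free_pages_prepare() can keep calling kernel_poison_pages()
unconditionally, and the static key check still gets inlined.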

--
Thanks,

David / dhildenb




