On Tue, 2019-02-19 at 12:04 +0100, Borislav Petkov wrote:
> On Mon, Jan 28, 2019 at 04:34:16PM -0800, Rick Edgecombe wrote:
> > For architectures with CONFIG_ARCH_HAS_SET_ALIAS, pages can be unmapped
> > briefly on the directmap, even when CONFIG_DEBUG_PAGEALLOC is not
> > configured. So this changes kernel_map_pages and kernel_page_present to be
>
> s/this changes/change/
>
> From Documentation/process/submitting-patches.rst:
>
> "Describe your changes in imperative mood, e.g. "make xyzzy do frotz"
>  instead of "[This patch] makes xyzzy do frotz" or "[I] changed xyzzy
>  to do frotz", as if you are giving orders to the codebase to change
>  its behaviour."
>
> Also, please end function names with parentheses.

Yes, gotcha.

> > defined when CONFIG_ARCH_HAS_SET_ALIAS is defined as well. It also changes
> > places (page_alloc.c) where those functions are assumed to only be
> > implemented when CONFIG_DEBUG_PAGEALLOC is defined.
>
> The commit message doesn't need to say "what" you're doing - that should
> be obvious from the diff below. It should rather say "why" you're doing
> it.

Ok, sorry. I'll make this more concise.

> > So now when CONFIG_ARCH_HAS_SET_ALIAS=y, hibernate will handle not present
> > page when saving. Previously this was already done when
>
>        pages
>
> > CONFIG_DEBUG_PAGEALLOC was configured. It does not appear to have a big
> > hibernating performance impact.
>
> Comment over safe_copy_page

Oh, yes, you are right.

> > Before:
> > [ 4.670938] PM: Wrote 171996 kbytes in 0.21 seconds (819.02 MB/s)
> >
> > After:
> > [ 4.504714] PM: Wrote 178932 kbytes in 0.22 seconds (813.32 MB/s)
>
> IINM, that's like 1734 pages more. How am I to understand this number?
>
> Code has called set_alias_nv_noflush() on them and safe_copy_page() now
> maps them one by one to copy them to the hibernation image?
>
> Thx.

These are from the logs hibernate generates. The concern was that hibernate
could be slightly slower because of the added check of whether each page is
mapped. The bandwidth numbers are the figures to compare: 819.02 -> 813.32
MB/s. Some randomness must have resulted in different amounts of memory being
in use between the two tests, hence the different kbyte counts. I can just
drop the log lines and quote only the bandwidth numbers.

Thanks,

Rick
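
P.S. For anyone following along, the check being measured above lives in
safe_copy_page(). Roughly, from memory of kernel/power/snapshot.c (a sketch,
not verbatim from this series):

	static void safe_copy_page(void *dst, struct page *s_page)
	{
		if (kernel_page_present(s_page)) {
			/* Page is mapped in the direct map: copy it as-is. */
			do_copy_page(dst, page_address(s_page));
		} else {
			/*
			 * Page is currently unmapped (e.g. by
			 * CONFIG_DEBUG_PAGEALLOC, or now also while a
			 * set_alias_*() unmap is in flight): map it
			 * temporarily, copy it, then unmap it again.
			 */
			kernel_map_pages(s_page, 1, 1);
			do_copy_page(dst, page_address(s_page));
			kernel_map_pages(s_page, 1, 0);
		}
	}

So the extra cost per saved page is just the kernel_page_present() lookup,
plus a map/unmap round trip for the (normally few) pages that happen to be
unmapped at the time.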