The patch titled
     Subject: Documentation/vm/transhuge.txt: fix trivial typos
has been removed from the -mm tree.  Its filename was
     docs-vm-transhuge-fix-few-trivial-typos.patch

This patch was dropped because it was merged into mainline or a subsystem tree

------------------------------------------------------
From: SeongJae Park <sj38.park@xxxxxxxxx>
Subject: Documentation/vm/transhuge.txt: fix trivial typos

[akpm@xxxxxxxxxxxxxxxxxxxx: fixes per Randy]
Link: http://lkml.kernel.org/r/20170405210259.2067-1-sj38.park@xxxxxxxxx
Signed-off-by: SeongJae Park <sj38.park@xxxxxxxxx>
Cc: Jonathan Corbet <corbet@xxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 Documentation/vm/transhuge.txt |   10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff -puN Documentation/vm/transhuge.txt~docs-vm-transhuge-fix-few-trivial-typos Documentation/vm/transhuge.txt
--- a/Documentation/vm/transhuge.txt~docs-vm-transhuge-fix-few-trivial-typos
+++ a/Documentation/vm/transhuge.txt
@@ -266,7 +266,7 @@ for each mapping.
 
 The number of file transparent huge pages mapped to userspace is available
 by reading ShmemPmdMapped and ShmemHugePages fields in /proc/meminfo.
-To identify what applications are mapping file transparent huge pages, it
+To identify what applications are mapping file transparent huge pages, it
 is necessary to read /proc/PID/smaps and count the FileHugeMapped fields
 for each mapping.
 
@@ -292,7 +292,7 @@ thp_collapse_alloc_failed is incremented
 	the allocation.
 
 thp_file_alloc is incremented every time a file huge page is successfully
-i	allocated.
+	allocated.
 
 thp_file_mapped is incremented every time a file huge page is mapped into
 	user address space.
@@ -501,7 +501,7 @@ scanner can get reference to a page is g
 
 All tail pages have zero ->_refcount until atomic_add(). This prevents the
 scanner from getting a reference to the tail page up to that point. After the
-atomic_add() we don't care about the ->_refcount value. We already known how
+atomic_add() we don't care about the ->_refcount value. We already know how
 many references should be uncharged from the head page.
 
 For head page get_page_unless_zero() will succeed and we don't mind. It's
@@ -519,8 +519,8 @@ comes. Splitting will free up unused sub
 
 Splitting the page right away is not an option due to locking context in
 the place where we can detect partial unmap. It's also might be
-counterproductive since in many cases partial unmap unmap happens during
-exit(2) if an THP crosses VMA boundary.
+counterproductive since in many cases partial unmap happens during exit(2) if
+a THP crosses a VMA boundary.
 
 Function deferred_split_huge_page() is used to queue page for splitting.
 The splitting itself will happen when we get memory pressure via shrinker
_

Patches currently in -mm which might be from sj38.park@xxxxxxxxx are

mm-khugepaged-add-missed-tracepoint-for-collapse_huge_page_swapin.patch

--
To unsubscribe from this list: send the line "unsubscribe mm-commits" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
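
[Editor's note, not part of the archived message: the documentation text quoted in the
hunks above describes how file THP usage can be observed through /proc/meminfo and
/proc/PID/smaps.  A minimal sketch of reading those counters is given below; it is
illustrative only, the field names (ShmemHugePages, ShmemPmdMapped, FileHugeMapped) are
taken verbatim from the quoted documentation, and the PID used at the end is a placeholder.]

    # Illustrative sketch only -- not part of the patch.  Field names come from
    # the documentation text quoted in the hunks above.
    def shmem_thp_from_meminfo():
        """Return the ShmemHugePages/ShmemPmdMapped values (kB) from /proc/meminfo."""
        wanted = ("ShmemHugePages", "ShmemPmdMapped")
        values = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key = line.split(":")[0]
                if key in wanted:
                    values[key] = int(line.split()[1])  # second field is the size in kB
        return values

    def file_thp_mapped(pid):
        """Sum the FileHugeMapped fields (kB) over all mappings in /proc/PID/smaps."""
        total = 0
        with open("/proc/%d/smaps" % pid) as f:
            for line in f:
                if line.startswith("FileHugeMapped:"):
                    total += int(line.split()[1])
        return total

    if __name__ == "__main__":
        print(shmem_thp_from_meminfo())
        print(file_thp_mapped(1))  # PID 1 used purely as an example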