The patch titled
     Subject: mm: fix some typos in mm directory
has been added to the -mm tree. Its filename is
     mm-fix-some-typo-scatter-in-mm-directory.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-fix-some-typo-scatter-in-mm-directory.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-fix-some-typo-scatter-in-mm-directory.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Wei Yang <richard.weiyang@xxxxxxxxx>
Subject: mm: fix some typos in mm directory

No functional change.

Link: http://lkml.kernel.org/r/20190118235123.27843-1-richard.weiyang@xxxxxxxxx
Signed-off-by: Wei Yang <richard.weiyang@xxxxxxxxx>
Reviewed-by: Pekka Enberg <penberg@xxxxxxxxxx>
Acked-by: Mike Rapoport <rppt@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 include/linux/mmzone.h |    2 +-
 mm/migrate.c           |    2 +-
 mm/mmap.c              |    8 ++++----
 mm/page_alloc.c        |    4 ++--
 mm/slub.c              |    2 +-
 mm/vmscan.c            |    2 +-
 6 files changed, 10 insertions(+), 10 deletions(-)

--- a/include/linux/mmzone.h~mm-fix-some-typo-scatter-in-mm-directory
+++ a/include/linux/mmzone.h
@@ -1301,7 +1301,7 @@ void memory_present(int nid, unsigned lo

 /*
 * If it is possible to have holes within a MAX_ORDER_NR_PAGES, then we
- * need to check pfn validility within that MAX_ORDER_NR_PAGES block.
+ * need to check pfn validity within that MAX_ORDER_NR_PAGES block.
 * pfn_valid_within() should be used in this case; we optimise this away
 * when we have no holes within a MAX_ORDER_NR_PAGES block.
 */
--- a/mm/migrate.c~mm-fix-some-typo-scatter-in-mm-directory
+++ a/mm/migrate.c
@@ -100,7 +100,7 @@ int isolate_movable_page(struct page *pa
 /*
 * Check PageMovable before holding a PG_lock because page's owner
 * assumes anybody doesn't touch PG_lock of newly allocated page
- * so unconditionally grapping the lock ruins page's owner side.
+ * so unconditionally grabbing the lock ruins page's owner side.
 */
 if (unlikely(!__PageMovable(page)))
 goto out_putpage;
--- a/mm/mmap.c~mm-fix-some-typo-scatter-in-mm-directory
+++ a/mm/mmap.c
@@ -438,7 +438,7 @@ static void vma_gap_update(struct vm_are
 {
 /*
 * As it turns out, RB_DECLARE_CALLBACKS() already created a callback
- * function that does exacltly what we want.
+ * function that does exactly what we want.
 */
 vma_gap_callbacks_propagate(&vma->vm_rb, NULL);
 }
@@ -1012,7 +1012,7 @@ static inline int is_mergeable_vma(struc
 * VM_SOFTDIRTY should not prevent from VMA merging, if we
 * match the flags but dirty bit -- the caller should mark
 * merged VMA as dirty. If dirty bit won't be excluded from
- * comparison, we increase pressue on the memory system forcing
+ * comparison, we increase pressure on the memory system forcing
 * the kernel to generate new VMAs when old one could be
 * extended instead.
 */
@@ -1115,7 +1115,7 @@ can_vma_merge_after(struct vm_area_struc
 *    PPPP    NNNN    PPPPPPPPPPPP    PPPPPPPPNNNN    PPPPNNNNNNNN
 *    might become    case 1 below    case 2 below    case 3 below
 *
- * It is important for case 8 that the the vma NNNN overlapping the
+ * It is important for case 8 that the vma NNNN overlapping the
 * region AAAA is never going to extended over XXXX. Instead XXXX must
 * be extended in region AAAA and NNNN must be removed. This way in
 * all cases where vma_merge succeeds, the moment vma_adjust drops the
@@ -1645,7 +1645,7 @@ SYSCALL_DEFINE1(old_mmap, struct mmap_ar
 #endif /* __ARCH_WANT_SYS_OLD_MMAP */

 /*
- * Some shared mappigns will want the pages marked read-only
+ * Some shared mappings will want the pages marked read-only
 * to track write events. If so, we'll downgrade vm_page_prot
 * to the private version (using protection_map[] without the
 * VM_SHARED bit).
--- a/mm/page_alloc.c~mm-fix-some-typo-scatter-in-mm-directory
+++ a/mm/page_alloc.c
@@ -7540,7 +7540,7 @@ static void __setup_per_zone_wmarks(void
 * value here.
 *
 * The WMARK_HIGH-WMARK_LOW and (WMARK_LOW-WMARK_MIN)
- * deltas control asynch page reclaim, and so should
+ * deltas control async page reclaim, and so should
 * not be capped for highmem.
 */
 unsigned long min_pages;
@@ -8017,7 +8017,7 @@ bool has_unmovable_pages(struct zone *zo

 /*
 * Hugepages are not in LRU lists, but they're movable.
- * We need not scan over tail pages bacause we don't
+ * We need not scan over tail pages because we don't
 * handle each tail page individually in migration.
 */
 if (PageHuge(page)) {
--- a/mm/slub.c~mm-fix-some-typo-scatter-in-mm-directory
+++ a/mm/slub.c
@@ -2121,7 +2121,7 @@ redo:
 if (!lock) {
 lock = 1;
 /*
- * Taking the spinlock removes the possiblity
+ * Taking the spinlock removes the possibility
 * that acquire_slab() will see a slab page that
 * is frozen
 */
--- a/mm/vmscan.c~mm-fix-some-typo-scatter-in-mm-directory
+++ a/mm/vmscan.c
@@ -3616,7 +3616,7 @@ static bool kswapd_shrink_node(pg_data_t
 *
 * kswapd scans the zones in the highmem->normal->dma direction. It skips
 * zones which have free_pages > high_wmark_pages(zone), but once a zone is
- * found to have free_pages <= high_wmark_pages(zone), any page is that zone
+ * found to have free_pages <= high_wmark_pages(zone), any page in that zone
 * or lower is eligible for reclaim until at least one usable zone is
 * balanced.
 */
_

Patches currently in -mm which might be from richard.weiyang@xxxxxxxxx are

mm-slub-make-the-comment-of-put_cpu_partial-complete.patch
mm-remove-extra-drain-pages-on-pcp-list.patch
mm-fix-some-typo-scatter-in-mm-directory.patch
mm-page_alloc-calculate-first_deferred_pfn-directly.patch