Hi,

On 10/14/23 05:33, Muhammad Muzammil wrote:
> debug_vm_pgtable.c: Fixed typo
> internal.h: Fixed typo
> memcontrol.c: Fixed typo
> mmap.c: Fixed typo
>
> Signed-off-by: Muhammad Muzammil <m.muzzammilashraf@xxxxxxxxx>

These all look good to me. Thanks.

Acked-by: Randy Dunlap <rdunlap@xxxxxxxxxxxxx>

One comment below:

> ---
>  mm/debug_vm_pgtable.c | 4 ++--
>  mm/internal.h         | 2 +-
>  mm/memcontrol.c       | 4 ++--
>  mm/mmap.c             | 2 +-
>  4 files changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
> index 48e329ea5ba3..e651500e597a 100644
> --- a/mm/debug_vm_pgtable.c
> +++ b/mm/debug_vm_pgtable.c
> @@ -1322,8 +1322,8 @@ static int __init debug_vm_pgtable(void)
>  	 * true irrespective of the starting protection value for a
>  	 * given page table entry.
>  	 *
> -	 * Protection based vm_flags combinatins are always linear
> -	 * and increasing i.e starting from VM_NONE and going upto
> +	 * Protection based vm_flags combinations are always linear
> +	 * and increasing i.e starting from VM_NONE and going up to
>  	 * (VM_SHARED | READ | WRITE | EXEC).
>  	 */
>  #define VM_FLAGS_START	(VM_NONE)
> diff --git a/mm/internal.h b/mm/internal.h
> index b52a526d239d..b61034bd50f5 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -601,7 +601,7 @@ extern bool mlock_future_ok(struct mm_struct *mm, unsigned long flags,
>   * range.
>   * "fully mapped" means all the pages of folio is associated with the page
>   * table of range while this function just check whether the folio range is
> - * within the range [start, end). Funcation caller nees to do page table
> + * within the range [start, end). Function caller needs to do page table
>   * check if it cares about the page table association.
>   *
>   * Typical usage (like mlock or madvise) is:
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index be2ad117515e..7987a092e530 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -842,7 +842,7 @@ void __mod_memcg_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
>  	memcg = pn->memcg;
>
>  	/*
> -	 * The caller from rmap relay on disabled preemption becase they never
> +	 * The caller from rmap relay on disabled preemption because they never
>  	 * update their counter from in-interrupt context. For these two

I don't know what that (partial) sentence is trying to say...
Maybe someone else does.

>  	 * counters we check that the update is never performed from an
>  	 * interrupt context while other caller need to have disabled interrupt.
> @@ -8104,7 +8104,7 @@ static struct cftype memsw_files[] = {
>  	 *
>  	 * This doesn't check for specific headroom, and it is not atomic
>  	 * either. But with zswap, the size of the allocation is only known
> -	 * once compression has occured, and this optimistic pre-check avoids
> +	 * once compression has occurred, and this optimistic pre-check avoids
>  	 * spending cycles on compression when there is already no room left
>  	 * or zswap is disabled altogether somewhere in the hierarchy.
>  	 */
> diff --git a/mm/mmap.c b/mm/mmap.c
> index b59f5e26b6fb..27539ffe2048 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -1223,7 +1223,7 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
>  	 * Does the application expect PROT_READ to imply PROT_EXEC?
>  	 *
>  	 * (the exception is when the underlying filesystem is noexec
> -	 * mounted, in which case we dont add PROT_EXEC.)
> +	 * mounted, in which case we don't add PROT_EXEC.)
>  	 */
>  	if ((prot & PROT_READ) && (current->personality & READ_IMPLIES_EXEC))
>  		if (!(file && path_noexec(&file->f_path)))

-- 
~Randy